3engines_doc/site/search/search_index.json
2025-06-19 21:50:45 +05:30

{"config":{"lang":["en"],"separator":"[\\s\\-,:!=\\[\\]()\"`/]+|\\.(?!\\d)|&[lg]t;|(?!\\b)(?=[A-Z][a-z])","pipeline":["stopWordFilter"]},"docs":[{"location":"index.html","title":"Welcome to 3Engines Cloud Documentation \ud83c\udf10","text":"<p>3Engines Cloud delivers robust, scalable, and secure cloud solutions for modern enterprises. Dive into our comprehensive documentation to kickstart your journey, manage resources efficiently, and optimize your cloud experience! \ud83d\ude80</p>"},{"location":"index.html#quick-start","title":"Quick Start \u26a1","text":"<p>Get up and running with these essential guides: - Cloud Overview \u2601\ufe0f - Data Volume Management \ud83d\udcbe - Networking \ud83c\udf0d - S3 Storage \ud83d\uddc4\ufe0f - Windows Management \ud83d\udda5\ufe0f - Release Notes \ud83d\udcdd</p>"},{"location":"index.html#documentation-sections","title":"Documentation Sections \ud83d\udcda","text":""},{"location":"index.html#cloud","title":"Cloud \u2601\ufe0f","text":"<p>Explore guides for managing and utilizing cloud resources in Cloud Overview.</p>"},{"location":"index.html#data-volume","title":"Data Volume \ud83d\udcbe","text":"<p>Master attaching, managing, and backing up volumes with Data Volume Management.</p>"},{"location":"index.html#networking","title":"Networking \ud83c\udf0d","text":"<p>Set up networking, SSH, floating IPs, and more in Networking.</p>"},{"location":"index.html#s3-storage","title":"S3 Storage \ud83d\uddc4\ufe0f","text":"<p>Learn about object storage, S3 tools, and usage in S3 Storage.</p>"},{"location":"index.html#windows-management","title":"Windows Management \ud83d\udda5\ufe0f","text":"<p>Discover guides for Windows VMs and remote access in Windows Management.</p>"},{"location":"index.html#release-notes","title":"Release Notes \ud83d\udcdd","text":"<p>Stay updated with the latest changes and features in Release Notes.</p>"},{"location":"index.html#support","title":"Support \ud83d\udce7","text":"<p>Need assistance? 
Our dedicated support team is here to help! - General Support: support@rootxwire.com - Admin Inquiries: admin@rootxwire.com We\u2019re committed to ensuring your success on 3Engines Cloud! \ud83c\udf1f</p> <p>Pro Tip: Use the side navigation to explore all topics. Expand sections to uncover detailed guides tailored to your needs! \ud83e\udded</p>"},{"location":"index.html#contents","title":"Contents","text":""},{"location":"index.html#cloud_1","title":"CLOUD \u2601\ufe0f","text":"<ul> <li>Dashboard Overview \u2013 Project Quotas And Flavors Limits on 3Engines Cloud</li> <li>How to access the VM from OpenStack console on 3Engines Cloud</li> <li>How to clone existing and configured VMs on 3Engines Cloud</li> <li>How to fix unresponsive console issue on 3Engines Cloud</li> <li>How to generate and manage EC2 credentials on 3Engines Cloud</li> <li>How to generate or use Application Credentials via CLI on 3Engines Cloud</li> <li>How to Use GUI in Linux VM on 3Engines Cloud and access it From Local Linux Computer</li> <li>How To Create a New Linux VM With NVIDIA Virtual GPU in the OpenStack Dashboard Horizon on 3Engines Cloud</li> <li>How to install and use Docker on Ubuntu 24.04</li> <li>How to use Security Groups in Horizon on 3Engines Cloud</li> <li>How to create key pair in OpenStack Dashboard on 3Engines Cloud</li> <li>How to create new Linux VM in OpenStack Dashboard Horizon on 3Engines Cloud</li> <li>How to install Python virtualenv or virtualenvwrapper on 3Engines Cloud</li> <li>How to start a VM from a snapshot on 3Engines Cloud</li> <li>Status Power State and dependencies in billing of instance VMs on 3Engines Cloud</li> <li>How to upload your custom image using OpenStack CLI on 3Engines Cloud</li> <li>VM created with option Create New Volume No on 3Engines Cloud</li> <li>VM created with option Create New Volume Yes on 3Engines Cloud</li> <li>What is an OpenStack domain on 3Engines Cloud</li> <li>What is an OpenStack project on 3Engines Cloud</li> <li>How to 
create a Linux VM and access it from Windows desktop on 3Engines Cloud</li> <li>How to create a Linux VM and access it from Linux command line on 3Engines Cloud</li> <li>DNS as a Service on 3Engines Cloud Hosting</li> <li>What Image Formats are Available in OpenStack 3Engines Cloud cloud</li> <li>How to upload custom image to 3Engines Cloud cloud using OpenStack Horizon dashboard</li> <li>How to create Windows VM on OpenStack Horizon and access it via web console on 3Engines Cloud</li> <li>How to transfer volumes between domains and projects using Horizon dashboard on 3Engines Cloud</li> <li>Spot instances on 3Engines Cloud</li> <li>How to create instance snapshot using Horizon on 3Engines Cloud</li> <li>How to start a VM from instance snapshot using Horizon dashboard on 3Engines Cloud</li> <li>How to create a VM using the OpenStack CLI client on 3Engines Cloud cloud</li> <li>OpenStack User Roles on 3Engines Cloud</li> <li>Resizing a virtual machine using OpenStack Horizon on 3Engines Cloud</li> <li>Block storage and object storage performance limits on 3Engines Cloud</li> </ul>"},{"location":"index.html#data-volume_1","title":"DATA VOLUME \ud83d\udcbe","text":"<ul> <li>How to attach a volume to VM less than 2TB on Linux on 3Engines Cloud</li> <li>How to attach a volume to VM more than 2TB on Linux on 3Engines Cloud</li> <li>Ephemeral vs Persistent storage option Create New Volume on 3Engines Cloud</li> <li>How to export a volume over NFS on 3Engines Cloud</li> <li>How to export a volume over NFS outside of a project on 3Engines Cloud</li> <li>How to extend the volume in Linux on 3Engines Cloud</li> <li>How to mount object storage in Linux on 3Engines Cloud</li> <li>How to move data volume between two VMs using OpenStack Horizon on 3Engines Cloud</li> <li>How many objects can I put into Object Storage container bucket on 3Engines Cloud</li> <li>How to create volume Snapshot and attach as Volume on Linux or Windows on 3Engines Cloud</li> <li>Volume snapshot 
inheritance and its consequences on 3Engines Cloud</li> <li>How to Create Backup of Your Volume From Windows Machine on 3Engines Cloud</li> <li>How To Attach Volume To Windows VM On 3Engines Cloud</li> <li>How to create or delete volume snapshot on 3Engines Cloud</li> <li>How to restore volume from snapshot on 3Engines Cloud</li> <li>Bootable versus non-bootable volumes on 3Engines Cloud</li> </ul>"},{"location":"index.html#networking_1","title":"NETWORKING \ud83c\udf0d","text":"<ul> <li>How can I access my VMs using names instead of IP addresses on 3Engines Cloud</li> <li>How to Add or Remove Floating IP\u2019s to your VM on 3Engines Cloud</li> <li>Cannot access VM with SSH or PING on 3Engines Cloud</li> <li>Cannot ping VM on 3Engines Cloud</li> <li>How to connect to your virtual machine via SSH in Linux on 3Engines Cloud</li> <li>How to create a network with router in Horizon Dashboard on 3Engines Cloud</li> <li>How can I open new ports for http for my service or instance on 3Engines Cloud</li> <li>Generating an SSH keypair in Linux on 3Engines Cloud</li> <li>How to add SSH key from Horizon web console on 3Engines Cloud</li> <li>How is my VM visible in the internet with no Floating IP attached on 3Engines Cloud</li> <li>How to run and configure Firewall as a service and VPN as a service on 3Engines Cloud</li> <li>How to import SSH public key to OpenStack Horizon on 3Engines Cloud</li> </ul>"},{"location":"_images/eefa_logged_in_creodias.png.html","title":"Eefa logged in creodias.png","text":"<p>None</p>"},{"location":"_images/eefa_mobile_auth_setup_creodias.png.html","title":"Eefa mobile auth setup creodias.png","text":"<p>None</p>"},{"location":"_images/eefa_qr_screen_creodias.png.html","title":"Eefa qr screen creodias.png","text":"<p>None</p>"},{"location":"_images/eefa_restart_login_creodias.png.html","title":"Eefa restart login creodias.png","text":"<p>None</p>"},{"location":"_images/eefa_several_rows.png.html","title":"Eefa several 
rows.png","text":"<p>None</p>"},{"location":"_images/eefa_sign_regular_creodias.png.html","title":"Eefa sign regular creodias.png","text":"<p>None</p>"},{"location":"_images/eefa_start_creodias.png.html","title":"Eefa start creodias.png","text":"<p>None</p>"},{"location":"_images/eefa_tapped.png.html","title":"Eefa tapped.png","text":"<p>None</p>"},{"location":"_images/freeotp_icon_to_select.png.html","title":"Freeotp icon to select.png","text":"<p>None</p>"},{"location":"_images/freeotp_tapped_number.png.html","title":"Freeotp tapped number.png","text":"<p>None</p>"},{"location":"_images/new_docker-1.png.html","title":"New docker 1.png","text":"<p>None</p>"},{"location":"_images/otp01.png.html","title":"Otp01.png","text":"<p>None</p>"},{"location":"_images/otp02.png.html","title":"Otp02.png","text":"<p>None</p>"},{"location":"_images/otp03.png.html","title":"Otp03.png","text":"<p>None</p>"},{"location":"_images/otp04.png.html","title":"Otp04.png","text":"<p>None</p>"},{"location":"_images/otp05.png.html","title":"Otp05.png","text":"<p>None</p>"},{"location":"_images/otp07.png.html","title":"Otp07.png","text":"<p>None</p>"},{"location":"_images/otp08.png.html","title":"Otp08.png","text":"<p>None</p>"},{"location":"_images/otp09.png.html","title":"Otp09.png","text":"<p>None</p>"},{"location":"_images/register_3Enginescloud1.png.html","title":"register 3Enginescloud1.png","text":"<p>None</p>"},{"location":"accountmanagement/Adding-Editing-Organizations.html.html","title":"Adding and editing Organization\ud83d\udd17","text":"<p>After logging into https://portal.3Engines.com/ press the Organization button on the left bar menu.</p> <p></p> <p>In the My Organization tab you can register an organization and become its administrator, or join an organization if you have an invitation code provided by its administrator.</p> <p>To register a new organization, please fill in all fields marked with * and press the Register Organization button. 
Once you register your organization you will be able to view and edit its details.</p> <p>The TAX ID / VAT field is not required, but without providing this data you won\u2019t be able to complete an automatic order or start a new contract. The VAT field is required if you need to receive an invoice with the correct tax rate.</p> <p>After registration, please go back to the left bar menu and select Organization.</p> <p>In the My Organization tab you will be able to:</p> <ul> <li>check the organization registration date</li> <li>view and edit the organization name, address and TAX ID / VAT number</li> <li>manage assignments</li> </ul>"},{"location":"accountmanagement/Contracts-Wallets.html.html","title":"Wallets and Contracts Management\ud83d\udd17","text":"<p>After logging into https://portal.3Engines.com/ press the Wallets/Contracts button on the left menu bar:</p> <p></p> <p>You will see the following 3 billing modes:</p> <p>PPUSE (Pay Per Use Wallet)</p> <p>This is a prepaid billing mode where services are billed according to usage. The Tenant purchases credit in the form of Billing Units (BU), which are added to their system wallet and are used to provision and keep resources and services. Billing Units (BU) are valued at 1 Euro for the purposes of the Price List. Billing Units (BU) are purchased through e-commerce or written contracts. Every 2 hours the tenant\u2019s credit is decreased by the cost of used resources, billed with an accuracy of up to 2 seconds. This is a very flexible mode that allows you to create and remove resources at will and pay only for the resources you use. It is useful for experimental and development work or for environments with highly variable resources.</p> <p>To add an additional wallet, please click the Add wallet button at the top. To add funds to an already created wallet, use the Transfer funds option. Please note that a wallet can be deleted only once all funds have been transferred to another wallet. 
You can check your billing by clicking the Billing report button.</p> <p>PAYG (Pay As You Go Contract)</p> <p>This is a postpaid billing mode where Tenants are invoiced periodically based on actual usage. In this mode, a Tenant signs a written contract and is billed, usually on a monthly basis, for actual usage of services and resources. PAYG contracts are purchased only through our sales department in the form of written contracts. With the exception of FIXED-TERM orders, all services and resources ordered under Accounts/Projects attached to the PAYG contract will be added to the invoice issued at the end of the agreed period. Billing is done every 2 hours. Tenants see their usage increase by the cost of used resources, billed with an accuracy of up to 2 seconds.</p> <p>To add a PAYG wallet you need to raise a ticket first (please check Helpdesk and Support).</p> <p>FIXED-TERM (Fixed Term Contract)</p> <p>This is a billing mode where services are bought for longer periods. In this mode, the Tenant purchases defined services for defined periods. FIXED-TERM contracts are purchased through e-commerce or written agreements. The long-term resources are paid for directly according to the stated price and currency. These resources cannot be changed afterwards, but in return the Tenant obtains a much cheaper offering. 
This mode is preferable for long-term usage of well-defined environments with well-understood needs.</p> <p>In order to successfully apply the FIXED-TERM billing mode to a given service in a particular billing session, the contract must fulfill the following conditions:</p> <ul> <li>Be active at the beginning of the session or at the moment the resource is launched;</li> <li>Be active until the end of the 2-hour billing session.</li> </ul> <p>If any of the conditions listed above is not met, the service is billed in PPU/PAYG mode.</p> <p>As a result of the described billing system behavior, if a contract is activated after services are launched, the FIXED-TERM billing mode is applied to them at the beginning of the next billing session, not immediately.</p> <p>To check how to add a wallet to a specific project, please visit /accountmanagement/Accounts-and-Projects-Management.</p>"},{"location":"accountmanagement/Cookie-consent-on-3Engines-Cloud.html.html","title":"Cookie consent on 3Engines Cloud\ud83d\udd17","text":"<p>A cookie is a small text file that your browser stores in your local environment and later uses to track or recognize your activities on the site.</p> <p>Cookies are an essential tool for the remote site to deliver the best possible user experience. The downside for the user is the potential loss of online privacy, which may, among other reasons, be caused by</p> <ul> <li>the site itself (if it uses its own cookies in a way that is detrimental to the user),</li> <li>many other sites that see available cookies and decide to gather reconnaissance about your surfing activities.</li> </ul>"},{"location":"accountmanagement/Cookie-consent-on-3Engines-Cloud.html.html#introducing-cookiebot-site","title":"Introducing Cookiebot site\ud83d\udd17","text":"<p>3Engines Cloud uses Cookiebot software to manage cookie consent from the user. 
It will show you all of the cookies that your browser is storing, and you will be able to choose which types of cookies 3Engines Cloud should take into account. Both Cookiebot and the 3Engines Cloud site are GDPR compliant; however, 3Engines Cloud also has its own Privacy Policy in effect.</p> <p>Of particular relevance is the Cookiebot page Logging and demonstration of user consents.</p> <p>Note</p> <p>You can directly interfere with cookies from your browser, operating system, network or VPN access software. This boils down to detecting, showing, hiding, tracking or removing access to certain types of cookies, and so on. These methods are, however, out of the scope of this article.</p>"},{"location":"accountmanagement/Cookie-consent-on-3Engines-Cloud.html.html#cookiebot-window","title":"Cookiebot window\ud83d\udd17","text":"<p>This is the Cookiebot window on 3Engines Cloud:</p> <p></p> <p>You will see it when visiting one of these sites for the first time:</p> <ul> <li>the main site itself, https://3Engines.com,</li> <li>the AI platform, https://sherlock.3Engines.com,</li> <li>the ecommerce page, https://ecommerce.3Engines.com, or</li> <li>the dashboard, https://portal.3Engines.com.</li> </ul> <p>Cookiebot is interactive and you can change your cookie preferences while using the site. If consent for using cookies is withdrawn, you will also see the same starting Cookiebot window when visiting these sites after the change.</p>"},{"location":"accountmanagement/Cookie-consent-on-3Engines-Cloud.html.html#option-allow-all","title":"Option Allow all\ud83d\udd17","text":"<p>Clicking the Allow all button will do what it says \u2013 the site will record all types of cookies and, consequently, track your behaviour completely. This option will unleash the full power of the site and you will always be able to use all of its capabilities. 
For you as the user, it is also the easiest and fastest way of dealing with cookies on the site.</p>"},{"location":"accountmanagement/Cookie-consent-on-3Engines-Cloud.html.html#details-view-of-available-cookies","title":"Details view of available cookies\ud83d\udd17","text":"<p>To see the cookies that you can give your consent to, click on Details.</p> <p></p> <p>There are five types of cookies and you may need to scroll down to see them all.</p> <p>When shown for the first time, the left button will be labeled Deny. Choosing it will turn off all of the cookies apart from the Necessary cookie type, which by default cannot be turned off. If you do not accept that default, refrain from using the site.</p>"},{"location":"accountmanagement/Cookie-consent-on-3Engines-Cloud.html.html#necessary-cookies","title":"Necessary cookies\ud83d\udd17","text":"<p>This is the most basic type of cookie and the site presumes you have already given consent to it. That is why the check button to the right of the row is already set to \u201cON\u201d. Technically, you can try to remove the consent by clicking on that button, but you will be met with a message like this:</p> <p></p> <p>You can also see additional details about that cookie type and the cookies it contains. 
By clicking on the name of a cookie, you will be able to see which company it comes from, what it looks like, and so on.</p> <p></p>"},{"location":"accountmanagement/Cookie-consent-on-3Engines-Cloud.html.html#the-number-of-cookies-shown-per-category","title":"The number of cookies shown per category\ud83d\udd17","text":"<p>The number of cookies that Cookiebot shows may vary widely and will increase if the sites you visit use:</p> <ul> <li>interactive elements, such as chat widgets, embedded maps, videos etc.,</li> <li>advertising networks,</li> <li>analytics tools,</li> <li>CDNs (Content Delivery Networks),</li> <li>recommendations,</li> <li>affiliate marketing,</li> <li>testing procedures</li> </ul> <p>and so on.</p> <p>Some large content sites may use up to 30-40 cookies per visitor \u2013 that alone will increase the overall number of cookies you see through Cookiebot.</p> <p>If you delete some or all cookies, perhaps using the browser of your choice, the numbers Cookiebot shows will drop to almost zero (but with each visit to another site or sites, they are almost sure to grow again).</p>"},{"location":"accountmanagement/Cookie-consent-on-3Engines-Cloud.html.html#preferences-cookie-type","title":"Preferences cookie type\ud83d\udd17","text":"<p>Enabling this cookie permits the site to store preferences such as your preferred language or the region you are in.</p>"},{"location":"accountmanagement/Cookie-consent-on-3Engines-Cloud.html.html#statistics-cookie-type","title":"Statistics cookie type\ud83d\udd17","text":"<p>Used for storing anonymized statistics. Although your data is stored in the background of the site, these cookies will not be revealed to third parties (unless required by law).</p>"},{"location":"accountmanagement/Cookie-consent-on-3Engines-Cloud.html.html#marketing-cookie-type","title":"Marketing cookie type\ud83d\udd17","text":"<p>Used to create user profiles to send you advertising. 
If you opt out of this cookie type, you may miss some new features of the site or, eventually, miss out on promotional campaigns, sales offers and so on.</p>"},{"location":"accountmanagement/Cookie-consent-on-3Engines-Cloud.html.html#unclassified-cookie-type","title":"Unclassified cookie type\ud83d\udd17","text":"<p>All other types of cookies, if any, that have not yet been classified.</p>"},{"location":"accountmanagement/Cookie-consent-on-3Engines-Cloud.html.html#how-to-give-consent-to-cookie-types","title":"How to give consent to cookie types\ud83d\udd17","text":"<p>Click the toggle button on the right side of the form window and, when you finish selecting, click Allow selection to confirm, or click Allow all to activate all of them.</p>"},{"location":"accountmanagement/Cookie-consent-on-3Engines-Cloud.html.html#about-cookie-consent","title":"About cookie consent\ud83d\udd17","text":"<p>This option explains what cookies are and also provides links to the Privacy Policy and, more specifically, to the Cookie Policy.</p> <p></p> <p>You can still change your cookie consent by clicking on Customize, which will lead you back to the Details tab (already explained above).</p>"},{"location":"accountmanagement/Cookie-consent-on-3Engines-Cloud.html.html#selecting-the-cookies-preferences","title":"Selecting the cookies preferences\ud83d\udd17","text":"<p>Once you click either the Allow selection or Allow all button, the form will disappear and your selection will be fixed. 
To change it, click on the icon in the lower left corner of the browser window.</p> <p></p> <p>A smaller window will appear:</p> <p></p> <p>Clicking on Withdraw your consent will annul all types of cookies except the necessary ones.</p> <p>The Change your consent button will lead to the Details tab we already discussed, where you will be able to edit your cookie preferences.</p>"},{"location":"accountmanagement/Cookie-consent-on-3Engines-Cloud.html.html#what-the-consent-data-look-like","title":"What the consent data look like\ud83d\udd17","text":"<p>To see what your consent data look like, click on Show details:</p> <p></p> <p>Each consent you give to the site generates a unique consent ID, which, together with the time and date, you can see in the image above. The consent ID is random, anonymous, encrypted and unique. In that way, user anonymity is preserved while the site is still in a position to conclude whether consent was actually provided or not.</p> <p>The cookie is saved on backend servers for 12 months. It is also saved in your browser so that the website can automatically read and respect the user\u2019s consent on all subsequent page requests.</p>"},{"location":"accountmanagement/Cookie-consent-on-3Engines-Cloud.html.html#troubleshooting","title":"Troubleshooting\ud83d\udd17","text":"<p>You can see the contents of the cookie file through various browser options and also through a file viewer on your desktop computer. It is quite possible (but not at all advisable) to delete the cookie file outside of the browser. In particular, deleting the entire cookie by force will also delete the necessary part of the cookie. 
You may then lose access to the site, be forced to contact Helpdesk and Support, and so on.</p>"},{"location":"accountmanagement/Cookie-consent-on-3Engines-Cloud.html.html#setting-up-cookies-on-3engines-cloud-subdomains","title":"Setting up cookies on 3Engines Cloud subdomains\ud83d\udd17","text":"<p>Cookiebot procedures are exactly the same on subdomains and the dashboard.</p> <p>Here is what the cookie consent window will look like, for example, on https://ecommerce.3Engines.com/:</p> <p></p> <p>Set the cookies up by clicking on the icon in the lower left part of the browser window.</p> <p></p>"},{"location":"accountmanagement/Editing-Profile.html.html","title":"Editing profile\ud83d\udd17","text":"<p>After logging into https://portal.3Engines.com/ press the My Profile button on the left bar menu.</p> <p></p> <p>In the My Profile tab you will be able to:</p> <ul> <li>check your email address and registration date</li> <li>view and edit your name and country</li> <li>change your password</li> <li>view and edit accepted agreements</li> </ul>"},{"location":"accountmanagement/Forgotten-Password.html.html","title":"Forgotten Password\ud83d\udd17","text":"<p>Go to the login page and click on the Forgot Password button.</p> <p></p> <p>Enter your email address into the field and press the Submit button. Check your mailbox for further steps.</p> <p>Open the link from the email and set up a new password.</p> <p>After that, click the Submit button.</p> <p></p> <p>If you haven\u2019t received a message, check your SPAM folder. 
If you have forgotten the email address or the message can\u2019t be delivered successfully, please contact our Support Team.</p>"},{"location":"accountmanagement/Help-Desk-And-Support.html.html","title":"Helpdesk and Support\ud83d\udd17","text":"<p>After logging into https://portal.3Engines.com/ press the Tickets button on the left menu bar to create or manage your tickets.</p> <p></p> <p>There are a few tabs available in the Tickets menu:</p> <ul> <li>ALL - allows you to view all your tickets</li> <li>OPEN - shows your open tickets</li> <li>CLOSED - contains the list of closed tickets</li> </ul> <p>As shown in the picture above, all tickets are categorized by Key, Topic, Status, Type, Created date and Last update date. You can sort your tickets by Type. For this purpose, choose Support, Problems, Sales, or Billing and Accounting from the top drop-down list. To check details or add a comment to an existing ticket, please use the Show details option on the right side of the window.</p> <p>If you want to create a new ticket, press the Add ticket button at the top of the page.</p> <p></p> <p>Choose the proper category, add a Summary, describe the issue and press the Create request button. 
Once you press the button, the ticket will be visible in the OPEN tab.</p>"},{"location":"accountmanagement/How-to-activate-OpenStack-CLI-access-to-3Engines-Cloud-cloud-using-one-or-two-factor-authentication.html.html","title":"How to activate OpenStack CLI access to 3Engines Cloud cloud using one- or two-factor authentication\ud83d\udd17","text":""},{"location":"accountmanagement/How-to-activate-OpenStack-CLI-access-to-3Engines-Cloud-cloud-using-one-or-two-factor-authentication.html.html#one-factor-and-two-factor-authentication-for-activating-command-line-access-to-the-cloud","title":"One-factor and two-factor authentication for activating command line access to the cloud\ud83d\udd17","text":"<p>To log into a site, you usually provide a user name and email address during the creation of the account and then use those same data to enter the site. You provide that data once, which is why it is called \u201cone-factor\u201d authentication. Two-factor authentication requires the same but considers it to be only the first step; on the 3Engines Cloud cloud, the second step is</p> <ul> <li>to generate a six-digit code using the appropriate software and then to</li> <li>send it to the cloud as a means of additional certification.</li> </ul> <p>Cloud parameters for authentication and, later, OpenStack CLI access, are found in a so-called RC file. 
This article will help you download and use it to first authenticate and then access the cloud using OpenStack CLI commands.</p>"},{"location":"accountmanagement/How-to-activate-OpenStack-CLI-access-to-3Engines-Cloud-cloud-using-one-or-two-factor-authentication.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>How to download the RC file</li> <li>Adjusting the name of the downloaded RC file</li> <li>The contents of the downloaded RC file</li> <li>How to activate the downloaded RC file</li> <li>One-factor authentication</li> <li>Two-factor authentication</li> <li>Testing the connection</li> <li>Resolving errors</li> </ul>"},{"location":"accountmanagement/How-to-activate-OpenStack-CLI-access-to-3Engines-Cloud-cloud-using-one-or-two-factor-authentication.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p> <p>No. 2 2FA</p> <p>If your account has 2FA enabled (which you will recognize from the respective prompt when authenticating), you need to install and configure a piece of software which generates the six-digit codes used for 2FA. To set that up, follow one of these articles, depending on the type of device you are using:</p> <ul> <li>Mobile device (Android, iOS): Two-Factor Authentication to 3Engines Cloud site using mobile application</li> <li>Computer: Two-Factor Authentication to 3Engines Cloud site using KeePassXC on desktop</li> </ul> <p>No. 3 OpenStackClient installed and available</p> <p>Installing OpenStackClient on various platforms will also install the ability to run the .sh files. Since OpenStack is written in Python, it is recommended to use a dedicated virtual environment for the rest of this article.</p> Install GitBash on Windows Run .sh files and install OpenStackClient from a GitBash window under Windows. 
How to install OpenStackClient GitBash for Windows on 3Engines Cloud. Install and run WSL (Linux under Windows) Run .sh files and install OpenStackClient from an Ubuntu window under Windows. How to install OpenStackClient on Windows using Windows Subsystem for Linux on 3Engines Cloud OpenStack Hosting. Install OpenStackClient on Linux How to install OpenStackClient for Linux on 3Engines Cloud."},{"location":"accountmanagement/How-to-activate-OpenStack-CLI-access-to-3Engines-Cloud-cloud-using-one-or-two-factor-authentication.html.html#how-to-download-the-rc-file","title":"How to download the RC file\ud83d\udd17","text":""},{"location":"accountmanagement/How-to-activate-OpenStack-CLI-access-to-3Engines-Cloud-cloud-using-one-or-two-factor-authentication.html.html#location-of-the-link-to-rc-file","title":"Location of the link to RC file\ud83d\udd17","text":"<p>Click on account name</p> <p>The top right corner of the Horizon screen contains the account name. Depending on the cloud you are using, you will see a menu like this:</p> WAW3-1, WAW3-2, FRA1-1 ../_images/click_on_email.png <p>Click on API Access</p> <p>Navigate to API Access -&gt; Download OpenStack RC File. Depending on the cloud you are using, you will see a menu like this:</p> WAW3-1, WAW3-2, FRA1-1 ../_images/download_rc_file_2fa.png <p>Option OpenStack clouds.yaml File is out of the scope of this article.</p>"},{"location":"accountmanagement/How-to-activate-OpenStack-CLI-access-to-3Engines-Cloud-cloud-using-one-or-two-factor-authentication.html.html#which-openstack-rc-file-to-download","title":"Which OpenStack RC file to download\ud83d\udd17","text":"<p>Choose the appropriate option, depending on the type of account:</p> 2FA not active on the account For clouds WAW3-1, WAW3-2, FRA1-1, select option OpenStack RC File. 2FA active on the account Download file OpenStack RC File (2FA). <p>You only need one copy of the RC file at any time. 
If you downloaded more than one copy of the file to the same folder without moving or renaming them, your operating system may differentiate amongst the downloaded files by adding additional characters at the end of the file name.</p> <p>By way of example, let the downloaded RC file name be cloud_00734_1-openrc-2fa.sh. For your convenience, you may want to</p> <ul> <li>rename it and</li> <li>move it to the folder in which you are going to activate it.</li> </ul>"},{"location":"accountmanagement/How-to-activate-OpenStack-CLI-access-to-3Engines-Cloud-cloud-using-one-or-two-factor-authentication.html.html#the-contents-of-the-downloaded-rc-file","title":"The contents of the downloaded RC file\ud83d\udd17","text":"<p>The RC file sets up environment variables which are used by the OpenStack CLI client to authenticate to the cloud. By convention, these variables are in upper case and start with OS_: OS_TENANT_ID, OS_PROJECT_NAME etc. For example, in the case of one-factor authentication, the RC file will ask for the password and store it in a variable called OS_PASSWORD.</p> <p>Below is an example of the content of an RC file which does not use 2FA:</p> <p></p> <p>A file which supports 2FA will have additional pieces of code for providing the second factor of authentication.</p>"},{"location":"accountmanagement/How-to-activate-OpenStack-CLI-access-to-3Engines-Cloud-cloud-using-one-or-two-factor-authentication.html.html#how-to-activate-the-downloaded-rc-file","title":"How to activate the downloaded RC file\ud83d\udd17","text":"<p>The activation procedure will depend on the operating system you are working with:</p> Ubuntu <p>Assuming you are in the same folder in which the RC file is present, use the source command:</p> <pre><code>source ./cloud_00734_1-openrc-2fa.sh\n</code></pre> macOS <p>The same source command should work on macOS. 
In some versions of macOS, though, the alternative command zsh can serve as well:</p> <pre><code>zsh ./cloud_00734_1-openrc-2fa.sh\n</code></pre> <p>Note that in both cases ./ means \u201cuse the file in this very folder you already are in\u201d.</p> Windows <p>On Windows, to execute a file with the .sh extension, you must have an installed application that can run Bash files.</p> <p>See Prerequisite No. 3, which describes in more detail how to run .sh files under various scenarios on Windows.</p>"},{"location":"accountmanagement/How-to-activate-OpenStack-CLI-access-to-3Engines-Cloud-cloud-using-one-or-two-factor-authentication.html.html#running-with-one-factor-authentication","title":"Running with one-factor authentication\ud83d\udd17","text":"<p>The activated .sh file will run in a Terminal window (the user name is grayed out for privacy reasons):</p> <p></p> <p>Enter the password, either by typing it in or by pasting it in the way your terminal supports, and press Enter on the keyboard. The password will not be visible on the screen.</p> <p>If your account has only one-factor authentication, this is all you need to do to start running commands from the command line.</p>"},{"location":"accountmanagement/How-to-activate-OpenStack-CLI-access-to-3Engines-Cloud-cloud-using-one-or-two-factor-authentication.html.html#two-factor-authentication","title":"Two-factor authentication\ud83d\udd17","text":"<p>If your file supports two-factor authentication, the terminal will first require the password, exactly as in the case of one-factor authentication. Then you will get a prompt for the second factor, which usually comes in the shape of a six-digit one-time password:</p> <p></p> <p>To get the six-digit code, run the app that you are using for authentication. As recommended in Prerequisite No. 
2, it may be</p> <ul> <li>FreeOTP on mobile,</li> <li>KeePassXC on desktop, or you may run</li> <li>other software of your choice, or you can even write</li> <li>your own Python or Bash code to generate the six digit code.</li> </ul> <p>Let\u2019s say that, for example, you are using FreeOTP on mobile device and that this is the icon you assigned to your account:</p> <p></p> <p>Tap on it and the six-digit number will appear:</p> <p></p> <p>This six-digit number will be regenerated every thirty seconds. Enter the latest number into the Terminal window and press Enter on the keyboard. If everything worked correctly, after a few seconds you should return to your normal command prompt with no additional output:</p> <p></p>"},{"location":"accountmanagement/How-to-activate-OpenStack-CLI-access-to-3Engines-Cloud-cloud-using-one-or-two-factor-authentication.html.html#duration-of-life-for-environment-variables-set-by-sourcing-the-rc-file","title":"Duration of life for environment variables set by sourcing the RC file\ud83d\udd17","text":"<p>When you source the file, environment variables are set for your current shell. To prove it, open two terminal windows, source the RC file in one of them but not in the other and you won\u2019t be able to authenticate from that second terminal window.</p> <p>That is why you will need to activate your RC file each time you start a new terminal session. Once authenticated and while that terminal window is open, you can use it to issue OpenStack CLI commands at will.</p>"},{"location":"accountmanagement/How-to-activate-OpenStack-CLI-access-to-3Engines-Cloud-cloud-using-one-or-two-factor-authentication.html.html#testing-the-connection","title":"Testing the connection\ud83d\udd17","text":"<p>If not already, install OpenStack client using one of the links in Prerequisite No 3. 
To verify access, execute the following command which lists flavors available in 3Engines Cloud cloud:</p> <pre><code>openstack flavor list\n</code></pre> <p>You should get output similar to this:</p> <p></p>"},{"location":"accountmanagement/How-to-activate-OpenStack-CLI-access-to-3Engines-Cloud-cloud-using-one-or-two-factor-authentication.html.html#resolving-errors","title":"Resolving errors\ud83d\udd17","text":""},{"location":"accountmanagement/How-to-activate-OpenStack-CLI-access-to-3Engines-Cloud-cloud-using-one-or-two-factor-authentication.html.html#jq-not-installed","title":"jq not installed\ud83d\udd17","text":"<p>jq is an app to parse JSON input. In this context, it serves to process the output from the server. It will be installed on most Linux distros. If you do not have it installed on your computer, you may get a message like this:</p> <p></p> <p>To resolve, download from the official support page and follow the directions to install on your operating system.</p> <p>If you are using Git Bash on Windows and running into this error, Step 6 of article on GitBash from Prerequisite 3, has proper instructions for installing jq.</p>"},{"location":"accountmanagement/How-to-activate-OpenStack-CLI-access-to-3Engines-Cloud-cloud-using-one-or-two-factor-authentication.html.html#2fa-accounts-entering-a-wrong-password-andor-six-digit-code","title":"2FA accounts: entering a wrong password and/or six-digit code\ud83d\udd17","text":"<p>If you enter a wrong six-digit code, you will get the following error:</p> <pre><code>Call to Keycloak failed with code 401 and message\n {\n \"error\": \"invalid_grant\",\n \"error_description\": \"Invalid user credentials\"\n}\n</code></pre> <p>If that is the case, simply activate the RC file again as previously and type the correct 
credentials.</p>"},{"location":"accountmanagement/How-to-activate-OpenStack-CLI-access-to-3Engines-Cloud-cloud-using-one-or-two-factor-authentication.html.html#2fa-accounts-lost-internet-connection","title":"2FA accounts: lost Internet connection\ud83d\udd17","text":"<p>Activating a 2FA RC file requires access to 3Engines Cloud account service because it involves not only setting variables, but also obtaining an appropriate token.</p> <p>If you do not have an Internet connection, you will receive the following output after having entered a six-digit code:</p> <pre><code>Call to Keycloak failed with code 000 and message\n</code></pre> <p>It will be followed by an empty line and you will be returned to your command prompt.</p> <p>To resolve this issue, please connect to the Internet and try to activate the RC file again. If you are certain that you have Internet connection, it could mean that 3Engines Cloud account service is down. If no downtime was announced for it, please contact 3Engines Cloud customer support: Helpdesk and Support</p>"},{"location":"accountmanagement/How-to-activate-OpenStack-CLI-access-to-3Engines-Cloud-cloud-using-one-or-two-factor-authentication.html.html#non-2fa-accounts-entering-a-wrong-password","title":"Non-2FA accounts: entering a wrong password\ud83d\udd17","text":"<p>If your account does not have two-factor authentication and you entered a wrong password, you will not get an error. However, if you try to execute a command like openstack flavor list, you will get the error similar to this:</p> <pre><code>The request you have made requires authentication. 
(HTTP 401) (Request-ID: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx)\n</code></pre> <p>In place of the x characters, you will see the actual Request-ID string.</p> <p>To resolve, activate your file again and enter the correct password.</p>"},{"location":"accountmanagement/How-to-activate-OpenStack-CLI-access-to-3Engines-Cloud-cloud-using-one-or-two-factor-authentication.html.html#using-the-wrong-file","title":"Using the wrong file\ud83d\udd17","text":"<p>If you have 2FA authentication enabled for your account but tried to activate the non-2FA version of the RC file, executing, say, the command openstack flavor list will give you the following error:</p> <pre><code>Unrecognized schema in response body. (HTTP 401)\n</code></pre> <p>If that is the case, download the correct file and use it.</p>"},{"location":"accountmanagement/How-to-activate-OpenStack-CLI-access-to-3Engines-Cloud-cloud-using-one-or-two-factor-authentication.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>With the appropriate version of the RC file activated, you should be able to create and use</p> <ul> <li>instances,</li> <li>volumes,</li> <li>networks,</li> <li>Kubernetes clusters</li> </ul> <p>and, in general, use all OpenStack CLI commands.</p> <p>For example, if you want to create a new virtual machine, you can follow this article:</p> <p>How to create a VM using the OpenStack CLI client on 3Engines Cloud cloud</p> <p>If you want your new virtual machine to be based on an image which is not available on the 3Engines Cloud, you will need to upload it. 
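</p> <p>In outline, such an upload can look like the sketch below. The image name, file path and formats are examples only, and an activated RC file plus an installed OpenStack client are assumed:</p>

```shell
# Hypothetical example of uploading a local qcow2 image; adjust the
# disk/container formats and the names to your actual image.
if command -v openstack >/dev/null 2>&1; then
  openstack image create --disk-format qcow2 --container-format bare \
    --file ./my-custom-image.qcow2 my-custom-image
else
  echo "openstack client not installed"
fi
```

<p>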
The following article contains instructions how to do it:</p> <p>How to upload your custom image using OpenStack CLI on 3Engines Cloud</p>"},{"location":"accountmanagement/How-to-buy-credits-using-pay-per-use-wallet-on-3Engines-Cloud.html.html","title":"How to buy credits using Pay Per Use wallet on 3Engines Cloud\ud83d\udd17","text":"<p>In this article you will learn how to use PPU (Pay Per Use) wallet in order to cover expenses of your account at 3Engines Cloud.</p>"},{"location":"accountmanagement/How-to-buy-credits-using-pay-per-use-wallet-on-3Engines-Cloud.html.html#what-are-we-going-to-cover","title":"What Are We Going To Cover\ud83d\udd17","text":"<ul> <li>Check for the correct tax ID or VAT number</li> <li>Select PPU as your way of payment</li> <li>Define how many credits for PPU service</li> <li>Choose payment method</li> <li>Check payment reports</li> </ul>"},{"location":"accountmanagement/How-to-buy-credits-using-pay-per-use-wallet-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://portal.3Engines.com/.</p> <p>No. 2 Have payment details ready</p> <p>You can pay either through bank transfer or through Stripe, in which case you can use your credit cards and other means of online payment. Be sure to have the payment information ready before you start the payment process.</p> <p>Transactions over 10,000 Euros must be made using a bank transfer.</p> <p>You are going to pay with the data you enter for the organization. Be sure that you have the correct tax ID or VAT number and ready to enter, if needed.</p> <p>No. 3 Useful articles</p> <p>As explained in Wallets and Contracts Management, there are three ways of paying for the services on 3Engines Cloud platform:</p> PPUSE (Pay Per Use Wallet) Billing according to the usage. PAYG (Pay As You Go Contract) Tenants are invoiced periodically based on actual usage. 
FIXED-TERM (Fixed Term Contract) Billing mode where services are bought for longer periods <p>In case you have not entered organization data yet, see article Adding and editing Organization</p>"},{"location":"accountmanagement/How-to-buy-credits-using-pay-per-use-wallet-on-3Engines-Cloud.html.html#step-1-check-for-the-correct-tax-id-or-vat-number","title":"Step 1 Check for the correct tax ID or VAT number\ud83d\udd17","text":"<p>Field Company tax ID / VAT number must be filled in with correct data.</p> <p></p> <p>You can check it by going to: https://portal.3Engines.com/panel/profile/organization</p> <p>Without it, you won\u2019t be able to make an order. An error like this one will appear:</p> <p></p>"},{"location":"accountmanagement/How-to-buy-credits-using-pay-per-use-wallet-on-3Engines-Cloud.html.html#step-2-select-ppu-as-your-way-of-payment","title":"Step 2 Select PPU as your way of payment\ud83d\udd17","text":"<p>On this link, you choose the actual contract type: https://ecommerce.3Engines.com/</p> <p></p> <p>Click on Buy now (assuming you will choose Pay Per Use), otherwise, click on Choose Fixed term to opt for Fixed term payments.</p>"},{"location":"accountmanagement/How-to-buy-credits-using-pay-per-use-wallet-on-3Engines-Cloud.html.html#step-3-define-how-many-credits-for-ppu-service","title":"Step 3 Define how many credits for PPU service\ud83d\udd17","text":"<p>Either by clicking button Buy now or by visiting the following link directly: https://ecommerce.3Engines.com/checkout/pay-per-use/, you will start the process of paying for PPU.</p> <p></p> <p>Let\u2019s say that you want to buy for 250 units, where each unit costs 1 Euro.</p> <p></p> <p>If you have only one wallet, the default wallet will be automatically offered. 
If you, however, have several wallets, choose the proper one for this order.</p>"},{"location":"accountmanagement/How-to-buy-credits-using-pay-per-use-wallet-on-3Engines-Cloud.html.html#step-4-choose-payment-method","title":"Step 4 Choose payment method\ud83d\udd17","text":"<p>Check whether the information about your organization is correct and proceed to payment.</p> <p></p> <p>There are two different payment methods:</p> Direct Bank Transfer This method is not instant and will take some time to fund your account. Stripe Stripe is a well-established payment processor. It is completely secure and gives you the possibility to fund your account with a variety of payment methods, including credit cards. <p>Again, transactions over 10,000 Euros must be made using a bank transfer.</p> <p>You will see a summary with a new invoice at the bottom of the page.</p> <p>If you chose direct bank transfer, scroll down to the payment section and click Pay:</p> <p></p>"},{"location":"accountmanagement/How-to-buy-credits-using-pay-per-use-wallet-on-3Engines-Cloud.html.html#step-5-check-payment-reports","title":"Step 5 Check payment reports\ud83d\udd17","text":"<p>Check whether the invoice amount matches the actual balance. The invoice amount appears in the upper right corner, next to the eye icon (marked with a red line).</p> <p></p> <p>Check the status of the invoice by going to this link: https://ecommerce.3Engines.com/transaction-list/</p> <p></p> <p>Check your wallet as well: https://portal.3Engines.com/panel/orders/pay-per-use</p> <p></p>"},{"location":"accountmanagement/How-to-buy-credits-using-pay-per-use-wallet-on-3Engines-Cloud.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>There are two ways of reaching us in case of any problems:</p> Dashboard ticket From the browser, use the link https://portal.3Engines.com/panel/profile/tickets or click on the option Support \u2013&gt; Tickets in the Dashboard. 
Standard 3Engines Cloud support The link is https://3Engines.com/contact/"},{"location":"accountmanagement/How-to-manage-TOTP-authentication-on-3Engines-Cloud.html.html","title":"How to manage TOTP authentication on 3Engines Cloud\ud83d\udd17","text":"<p>In order to use your 3Engines Cloud account, you need to set a password, and an additional factor of authentication. For the latter, the TOTP algorithm is being used. In this article you will learn how to manage your TOTP configuration.</p>"},{"location":"accountmanagement/How-to-manage-TOTP-authentication-on-3Engines-Cloud.html.html#what-are-we-going-to-cover","title":"What Are We Going To Cover\ud83d\udd17","text":"<ul> <li>Important information about TOTP</li> <li>Entering the TOTP management console</li> <li>Removing the TOTP secret key</li> <li>Adding a new TOTP secret key</li> <li>Contacting customer support</li> </ul>"},{"location":"accountmanagement/How-to-manage-TOTP-authentication-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud account: https://horizon.3Engines.com</p> <p>No. 2 2FA set on your account</p> <p>During account initialization, you will be prompted to configure 2FA TOTP software. 
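</p> <p>For the curious, the six-digit codes produced by such 2FA software follow the TOTP algorithm (RFC 6238). Below is a minimal sketch of what the app computes every thirty seconds, using only the Python standard library; the Base32 secret shown is a made-up example, not a real key:</p>

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, period=30, digits=6, now=None):
    """RFC 6238 TOTP with HMAC-SHA1, the variant used by common authenticator apps."""
    key = base64.b32decode(secret_b32.upper())
    # The moving factor is the number of whole periods since the Unix epoch.
    counter = int((time.time() if now is None else now) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: take 4 bytes at an offset given by the last nibble.
    offset = digest[-1] & 0x0F
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % 10 ** digits).zfill(digits)


# Made-up Base32 secret, for illustration only:
print(totp("JBSWY3DPEHPK3PXP"))
```

<p>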
You can, for instance, use one of the following articles for that purpose:</p>"},{"location":"accountmanagement/How-to-start-using-dashboard-services-on-3Engines-Cloud.html.html","title":"How to start using dashboard services on 3Engines Cloud\ud83d\udd17","text":"<p>When you try to use 3Engines Cloud dashboard at https://portal.3Engines.com/, you will see an advice on the order of operations to start using the dashboard properly.</p> <p></p>"},{"location":"accountmanagement/How-to-start-using-dashboard-services-on-3Engines-Cloud.html.html#step-1-set-up-the-organization","title":"Step 1 Set up the organization\ud83d\udd17","text":"<ol> <li>Go to the organization, add it by providing the name, details and a valid EU VAT number/TAX ID assigned to your country.</li> </ol> <p>The option to use is Configuration -&gt; Organization.</p> <p></p> <p>See article Adding and editing Organization.</p>"},{"location":"accountmanagement/How-to-start-using-dashboard-services-on-3Engines-Cloud.html.html#step-2-enable-payment-options","title":"Step 2 Enable payment options\ud83d\udd17","text":"<p>Go to the eCommerce site and top up your wallet with the required funds.</p> <p></p> <p>See article How to buy credits using Pay Per Use wallet on 3Engines Cloud.</p>"},{"location":"accountmanagement/How-to-start-using-dashboard-services-on-3Engines-Cloud.html.html#step-3-activate-the-project","title":"Step 3 Activate the project\ud83d\udd17","text":"<p>Go to \u201cCloud projects\u201d and activate the project in the cloud/region you are interested in. The options to choose are Billing and Reporting -&gt; Cloud projects/Wallets.</p> <p></p> <p>At the moment of this writing, there were four different regions to choose from: WAW3-1, WAW3-2, WAW4-1, FRA1-2. These regions are actually clouds running under OpenStack and in each you can have your own virtual machines, access to EO data, create Kubernetes clusters and so on. 
Although all clouds are running under OpenStack, there are differences in available software, hardware, resources and so on, so it pays to learn which cloud is best for you.</p> <p>You may want to work with all these clouds at the same time, maybe with different groups of people working on different projects and so on.</p> <p>It is up to you to activate all these clouds at once\u2026 or just one\u2026 or anything in between. The regions/clouds you activate in the dashboard can be seen in the Horizon dashboard, in the menu.</p>"},{"location":"accountmanagement/How-to-start-using-dashboard-services-on-3Engines-Cloud.html.html#step-4-start-using-the-chosen-cloud-in-horizon","title":"Step 4 Start using the chosen cloud in Horizon\ud83d\udd17","text":"<p>To start using the services, choose proper Cloud Panel from the Management Interfaces.</p> <p></p> <p>It will lead you to page https://horizon.3Engines.com:</p> <p></p> <p>Let\u2019s say we want to work with cloud WAW3-1.</p> <p></p> <p>Click on Sign In and the Horizon will show up. Horizon will remember which project and cloud were active previously and will return to them automatically. 
If you want to work with another cloud, select it manually.</p> <p></p>"},{"location":"accountmanagement/Inviting-New-User.html.html","title":"Inviting new user to your Organization\ud83d\udd17","text":"<p>Important</p> <p>One user can only be assigned to one organization at a time.</p> <p>To invite a new user to your organization you need to share an invitation code.</p> <p>After logging into https://portal.3Engines.com/ press the Invitations button on the left bar menu.</p> <p></p> <p>Now you can copy an invitation code by clicking the Copy to clipboard button and send it to a new user by email.</p> <p>After receiving the code, the user will join the organization by</p> <ul> <li>clicking Join an Organization in the Organization tab and</li> <li>pasting the invitation code.</li> </ul> <p>As an organization admin, you need to accept the invitation first.</p> <p>Go to the Invitations tab and choose an invitation that you want to accept or \u2013 otherwise \u2013 reject.</p> <p></p> <p>After accepting the invitation you will be able to add/edit roles. For more details please check Tenant manager users and roles on 3Engines Cloud.</p>"},{"location":"accountmanagement/Privacy-Policy.html.html","title":"Privacy policy for clients\ud83d\udd17","text":"<p>If you are not redirected, click here.</p>"},{"location":"accountmanagement/Registration-And-Account.html.html","title":"Registration and Setting up an Account\ud83d\udd17","text":"<p>Go to the https://portal.3Engines.com/ site and press the CREATE ACCOUNT button.</p> <p></p> <p>Fill in all fields marked with *, accept the mandatory terms and conditions, and press the Create Account button.</p> <p>Please note that marketing consents are not mandatory and can be changed at any time.</p> <p></p> <p>Once you create the account, the screen below will appear. Please check your mailbox and verify your email address. 
After that you will be able to log in.</p> <p></p> <p>For general information about the types of account and user roles you may have in the Dashboard, see Tenant manager users and roles on 3Engines Cloud</p> <p>After creating a personal account you can either create a new company account or join an existing one. See articles:</p> <p>Adding and editing Organization</p> <p>Inviting new user to your Organization</p> <p>If you are a single user you can only access a limited number of services.</p> <p>See article How to start using dashboard services on 3Engines Cloud</p>"},{"location":"accountmanagement/Removing-User-From-Organization.html.html","title":"Removing user from Organization\ud83d\udd17","text":"<p>After logging into https://portal.3Engines.com/ press the Sub-accounts button on the left bar menu to check the list of members of your Organization.</p> <p></p> <p>Select the user that you want to remove, press the Unassign button on the right side, and then press the Confirm button.</p> <p>The user will receive a notification about being removed from your Organization.</p>"},{"location":"accountmanagement/Services.html.html","title":"Services\ud83d\udd17","text":"<p>After logging into https://portal.3Engines.com/ press the Active services button on the left bar menu.</p> <p></p> <p>In this tab you are able to filter your services by Project or by Product.</p> <p>You can also check what type of contract or billing mode is assigned to your services. 
For more details please visit /accountmanagement/Accounts-and-Projects-Management.</p>"},{"location":"accountmanagement/Services.html.html#how-to-change-assigned-contract","title":"How to change assigned contract\ud83d\udd17","text":"<p>PAY PER USE - the user can assign a wallet to a specific project in the Accounts tab</p> <p>PAY AS YOU GO - the user can assign a wallet to a specific project in the Accounts tab</p> <p>FIXED TERM - is assigned by the 3Engines Support Team during the contract creation</p> <p>Please note that PPU/PAYG assignment status is visible in the Accounts tab.</p>"},{"location":"accountmanagement/Tenant-Manager-Users-And-Roles-On-3Engines-Cloud.html.html","title":"Tenant manager users and roles on 3Engines Cloud\ud83d\udd17","text":""},{"location":"accountmanagement/Tenant-Manager-Users-And-Roles-On-3Engines-Cloud.html.html#differences-between-openstack-user-roles-and-tenant-managers-roles","title":"Differences between OpenStack User Roles and Tenant Manager\u2019s Roles\ud83d\udd17","text":"<p>An OpenStack role is a personality that a user assumes to perform a specific set of operations. A role includes a set of rights and privileges. A user assuming that role inherits those rights and privileges. 
OpenStack roles are defined for each user and each project independently.</p> <p>A Tenant Manager role, on the other hand, defines whether a user should have the ability to manage an organization via the Tenant Manager or have access to OpenStack.</p>"},{"location":"accountmanagement/Tenant-Manager-Users-And-Roles-On-3Engines-Cloud.html.html#what-are-we-going-to-cover","title":"What Are We Going To Cover\ud83d\udd17","text":"<ul> <li>The difference between User Roles and Tenant Manager Role</li> <li>List the three basic roles an organization administrator can assign</li> <li>Show how to add a member+ role, which can have access to OpenStack and be used for managing projects</li> </ul>"},{"location":"accountmanagement/Tenant-Manager-Users-And-Roles-On-3Engines-Cloud.html.html#users-and-roles-in-the-tenant-manager","title":"Users and Roles in the Tenant Manager\ud83d\udd17","text":"<p>After logging into https://portal.3Engines.com/ click on the Sub-accounts button on the left bar menu.</p> <p></p> <p>Here you are able to:</p> <ul> <li>Check your organization\u2019s list of users and their roles</li> <li>Remove users from or add them to your organization (admin role)</li> </ul> <p>As an organization administrator you can assign one of the following roles to a user:</p> <ul> <li>admin - the user with the highest privileges; can manage the whole organization and has access to OpenStack.</li> <li>member - the default user with basic privileges.</li> <li>member+ - the same as member but has OpenStack access and can manage projects.</li> </ul>"},{"location":"accountmanagement/Tenant-Manager-Users-And-Roles-On-3Engines-Cloud.html.html#adding-member-user-to-your-project-in-openstack-using-horizon-interface","title":"Adding member+ user to your project in OpenStack using Horizon interface\ud83d\udd17","text":"<p>Users with the role of member+ have access to OpenStack and can be enabled to manage your organization projects. 
They cannot, however, manage the organization itself.</p> <p>To add a member+ user to the project, follow these steps:</p> <p>1. Check if your user has a member+ role in Tenant Manager.</p> <p>2. Log into https://horizon.3Engines.com as an admin.</p> <p>3. Select Identity \u2192 Projects</p> <p></p> <p>4. Select the project you want to add a user to and select Manage members</p> <p></p> <p>5. Add the desired user(s) to the project by clicking on the \u201c+\u201d button next to them.</p> <p></p> <p>6. Choose a suitable project role for the user and confirm by clicking Save in the lower-right corner.</p> <p></p> <p>7. The next time the user logs into OpenStack Horizon at https://horizon.3Engines.com, the suitable access to the project will be granted.</p>"},{"location":"accountmanagement/Tenant-Manager-Users-And-Roles-On-3Engines-Cloud.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>The article Inviting new user to your Organization shows how to invite a new user.</p> <p>Conversely, the article Removing user from Organization shows how to remove a user from the organization.</p> <p>The article /accountmanagement/Accounts-and-Projects-Management provides general guidance on creating and managing accounts and projects on 3Engines Cloud.</p>"},{"location":"accountmanagement/Two-Factor-Authentication-for-3Engines-Cloud-Site.html.html","title":"Two-Factor Authentication to 3Engines Cloud site using mobile application\ud83d\udd17","text":"<p>Warning</p> <p>Two-Factor Authentication is required for all 3Engines Cloud users. The only exceptions are accounts which log in using Keystone credentials.</p> <p>Traditionally, the most basic way to implement security online was to authenticate users and companies with a pair of usernames/passwords. Most usernames are email addresses and if an email address is breached, the bad actor can probably learn your password too. 
What once used to be secure enough is not secure now because of easy access to refined brute force methods, availability of computing power at scale, social engineering methods, identity theft and so on.</p> <p>The way to overcome this limitation is to introduce two or more factors or types of user authentication. These could be</p> <ul> <li>something the user knows (email address, the name of their first pet etc.)</li> <li>something the user has (token generator, smartphone, credit card etc.) or</li> <li>biometric information such as fingerprint, iris, retina, voice, face and so on.</li> </ul> <p>Logging into the 3Engines Cloud site uses two-factor authentication, meaning you will have to supply two independent types of data:</p> <ul> <li>the \u201cclassical\u201d username and password, as well as</li> <li>the numeric code supplied by a dedicated mobile app.</li> </ul> <p>This article is about using mobile devices to authenticate to the cloud. If you want to use your computer to do that, see Two-Factor Authentication to 3Engines Cloud site using KeePassXC on desktop.</p> <p>You will first have to install one of the following two mobile applications, for Android or iOS mobile operating systems:</p> <ul> <li>FreeOTP, where OTP stands for One Time Password, or</li> <li>Google Authenticator.</li> </ul> <p>We can use \u201cmobile authenticator\u201d as a generic term for a mobile app that can help authenticate with the account.</p>"},{"location":"accountmanagement/Two-Factor-Authentication-for-3Engines-Cloud-Site.html.html#which-one-to-use-freeotp-or-google-authenticator","title":"Which One to Use \u2013 FreeOTP or Google Authenticator?\ud83d\udd17","text":"<p>You can use FreeOTP with Google accounts instead of the Google Authenticator app.</p> <p>If you already use the Google Authenticator app for other accounts, you may prefer it over FreeOTP.</p> <p>Warning</p> <p>If your accounts are protected by Google Authenticator and it stops working, then you risk losing all the data 
that were behind those protected accounts. The most common scenario is to switch to a new phone number and then not be able to verify the accounts via a text message to the previous phone number.</p> <p>In this tutorial, you are going to use the FreeOTP app.</p> <p>Warning</p> <p>If you lose access to QR codes and cannot log into the Horizon site for 3Engines Cloud, ask Support service to help you by sending email to the following address support@3Engines.com.</p>"},{"location":"accountmanagement/Two-Factor-Authentication-for-3Engines-Cloud-Site.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>How to start using the mobile authenticator</li> <li>How to locate, download and install FreeOTP app on your mobile device</li> <li>How to set up FreeOTP app and connect it to your 3Engines Cloud account</li> <li>How to get new code each time you want to enter the site</li> </ul>"},{"location":"accountmanagement/Two-Factor-Authentication-for-3Engines-Cloud-Site.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>Use only one of the four possible combinations for two apps and two app stores.</p> <p>No. 1 FreeOTP app in Google Play Store</p> <p>Download FreeOTP app in Google Play Store using this link.</p> <p>No. 2 FreeOTP app in iOS App Store</p> <p>Download FreeOTP app in iOS App Store using this link.</p> <p>No. 3 Google Authenticator in Google Play Store</p> <p>Download Google Authenticator in Google Play Store using this link.</p> <p>No. 
4 Google Authenticator in iOS App Store</p> <p>Download Google Authenticator in iOS App Store using this link.</p> <p>Warning</p> <p>You should install the authenticator app before trying to log into the 3Engines Cloud site.</p> <p>You are now going to download, install and use the FreeOTP app to authenticate to the 3Engines Cloud site.</p>"},{"location":"accountmanagement/Two-Factor-Authentication-for-3Engines-Cloud-Site.html.html#step-1-download-and-install-freeotp-from-the-app-store","title":"Step 1 Download and Install FreeOTP from the App Store\ud83d\udd17","text":"<p>Using the App Store icon from the desktop of your iOS device, locate the app called freeotp. A screen like this will appear:</p> <p></p> <p>Tap on GET and the app will start downloading to your device.</p> <p></p> <p>It may take a minute or so; then install it by tapping the Install button.</p> <p></p> <p>Once installed, tap Open and the app will run. At first, there will be no tokens to work with:</p> <p></p> <p>Note</p> <p>FreeOTP can also use tokens to secure access to the remote site. The 3Engines Cloud site uses a QR code, so that is what you will use in this tutorial. (Both \u201ctoken\u201d and \u201cQR scan\u201d denote a secure connection to the site, but use different techniques in the process.)</p>"},{"location":"accountmanagement/Two-Factor-Authentication-for-3Engines-Cloud-Site.html.html#step-2-scan-qr-and-create-brand","title":"Step 2 Scan QR and Create Brand\ud83d\udd17","text":"<p>Select a brand, which means select an icon that will make your tokens stand out graphically. If you employ this app only to access 3Engines Cloud, you may select whichever icon you want.</p> <p></p> <p>In the next step, you may require that the phone is unlocked when the token is to be activated. 
Choose this option if you are afraid someone might steal your phone and gain access to your 3Engines Cloud data that way.</p> <p></p> <p>Clicking on the information icon will show you legal details about this app.</p> <p></p> <p>To scan the QR code, use the QR-like icon in the upper part of the screen, like this:</p> <p></p> <p>Click on it to get to the scanner part of the application and read the QR code from the login screen.</p> <p>Note</p> <p>The QR code will appear on screen when you first try to log into the 3Engines Cloud site (see below).</p> <p></p>"},{"location":"accountmanagement/Two-Factor-Authentication-for-3Engines-Cloud-Site.html.html#step-3-create-a-six-digit-code-to-enter-into-the-login-screen","title":"Step 3 Create a Six-digit Code to Enter Into the Login Screen\ud83d\udd17","text":"<p>Finally, you will see a row within the FreeOTP app, with the icon you chose and with the code that will appear automatically. For instance, if the code is 289582, that is the code you need to enter when the site asks you for a One-time code.</p> <p></p> <p>If you created several tokens or repeatedly scanned the QR code from the screen, you may see the appropriate number of rows on the mobile screen:</p> <p></p> <p>Tapping on any of these will produce the six-digit code that you have to type into the entry form to get logged in. 
Only one of these will be the right one; in this case, the first row produces the correct six-digit code for the 3Engines Cloud site.</p> <p></p> <p>You are now ready to log into the 3Engines Cloud site using the two-factor authentication.</p>"},{"location":"accountmanagement/Two-Factor-Authentication-for-3Engines-Cloud-Site.html.html#how-to-start-using-the-mobile-authenticator-with-your-account","title":"How to Start Using the Mobile Authenticator With Your Account\ud83d\udd17","text":"<p>Use the usual link https://horizon.3Engines.com to log into your 3Engines Cloud account and choose 3Engines Cloud in the input menu.</p> <p></p> <p>Click the blue Sign In button and enter your username / email and password:</p> <p></p> <p>If the data you entered has not already been linked to two-factor authentication, the next screen will be Mobile Authenticator Setup:</p> <p></p> <p>This screen will contain the QR code that you have to scan using the mobile authenticator app, in this case, the FreeOTP app.</p> <p>At this moment, start using the mobile device \u2013 activate the FreeOTP app first if not already active, scan the QR code with the QR icon and, as explained above, get the six-digit code on the mobile device screen.</p> <p>Retype that six-digit code into the One-time code field on the computer screen. 
It is denoted by an asterisk, meaning that it is mandatory to enter a value into this field.</p> <p>You can use the field Device Name to remind yourself which device the mobile authenticator app is installed on.</p> <p>Click on Submit and you will be brought back to the Sign in screen from the beginning:</p>"},{"location":"accountmanagement/Two-Factor-Authentication-for-3Engines-Cloud-Site.html.html#logging-into-the-site-once-the-two-factor-authentication-is-installed","title":"Logging Into the Site Once the Two-Factor Authentication is Installed\ud83d\udd17","text":"<p>Here is the workflow in one place, with all of the screens repeated for easy reference.</p> <p>Use the usual link https://horizon.3Engines.com to log into your 3Engines Cloud account and choose 3Engines Cloud in the input menu.</p> <p></p> <p>Click the blue Sign In button and enter your username / email and password:</p> <p></p> <p>Since the two-factor authentication is already set up, you will only see the window to enter the six-digit code.</p> <p></p> <p>Now activate the mobile authenticator app and get the code on the device screen, for instance, like this:</p> <p></p> <p>In this case, the code is 828966. Enter it into the form, click Submit and you will be logged in.</p> <p></p> <p>Note</p> <p>If the FreeOTP app is in the foreground on the mobile device while you are submitting the username and password, the app will react automatically and the proper six-digit code will appear on its own on the authenticator device.</p>"},{"location":"accountmanagement/Two-Factor-Authentication-for-3Engines-Cloud-Site.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>As mentioned in the beginning, you can use your computer for two-factor authentication \u2013 see article Two-Factor Authentication to 3Engines Cloud site using KeePassXC on desktop.</p> <p>Whether you use a mobile device or a computer to authenticate, you will be logged into Horizon. 
You will then need to activate access to 3Engines Cloud API functions to be able to run openstack commands. Please see article How to activate OpenStack CLI access to 3Engines Cloud cloud using one- or two-factor authentication.</p> <p>To learn how to manage your TOTP secret key, visit the following article: How to manage TOTP authentication on 3Engines Cloud - it can be useful if you, for instance, want to use a different method of authentication, are unable to extract your secret key from the currently used piece of software such as FreeOTP and do not have your secret key backed up in a readable way.</p>"},{"location":"accountmanagement/Using-KeePassXC-for-Two-Factor-Authentication-on-3Engines-Cloud.html.html","title":"Two-Factor Authentication to 3Engines Cloud site using KeePassXC on desktop\ud83d\udd17","text":"<p>Please see article Two-Factor Authentication to 3Engines Cloud site using mobile application if you want to use a smartphone app for the TOTP two-factor authentication.</p> <p>If you, however, want to use your desktop or laptop computer instead, KeePassXC is probably a good choice for you. It is a free and open source graphical password manager. It stores passwords, TOTP keys and other secrets in a file on your computer. You can later, for example, move that file manually to a different computer to use that device instead of the current one.</p> <p>Unlike software such as Bitwarden, 1Password or LastPass, KeePassXC does not have any cloud sync features.</p> <p>Attention</p> <p>Since KeePassXC does not provide any cloud storage, you need to make sure that you do not lose your file and whatever is required to decrypt it. You will lose all the content of the file if you lose any of these objects. You should therefore back up this file.</p> <p>If you already have KeePassXC installed and configured, skip to Step 3 Add the entry for your account or Step 4 Configure TOTP.</p> <p>The following instructions are for Ubuntu. 
If you use a different operating system, please refer to the appropriate documentation.</p>"},{"location":"accountmanagement/Using-KeePassXC-for-Two-Factor-Authentication-on-3Engines-Cloud.html.html#step-1-install-keepassxc","title":"Step 1 Install KeePassXC\ud83d\udd17","text":"<p>Install KeePassXC before logging in to the 3Engines Cloud website. Open the terminal, type the following command and press Enter:</p> <pre><code>sudo apt update &amp;&amp; sudo apt upgrade -y &amp;&amp; sudo apt install -y keepassxc\n</code></pre>"},{"location":"accountmanagement/Using-KeePassXC-for-Two-Factor-Authentication-on-3Engines-Cloud.html.html#step-2-configure-keepassxc","title":"Step 2 Configure KeePassXC\ud83d\udd17","text":"<p>Launch KeePassXC. During its first run, you will see the following window:</p> <p></p> <p>Click the Create new database button to create a file in which you can store your passwords, TOTP keys and other secrets. Now you will see the following window:</p> <p></p> <p>In the first step of database creation you may provide its name and description. The name provided here will not be the name of your file, so you may leave it as it is. Click Continue. The following window will appear:</p> <p></p> <p>Next, you may choose how long the decryption of your database should take. However, please keep in mind that, as it is written in that window, Higher values offer more protection, but opening the database will take longer. Leave the default database format and click Continue. You will now see the following window:</p> <p></p> <p>Now you need to provide the password for decrypting your database. Enter it again in the second text field. 
You can also add additional security measures using the button Add additional protection\u2026, but if you are just getting started it might not be needed.</p> <p>Attention</p> <p>If at any point in the future you are unable to provide your password (for example, because you have forgotten it) and any additional protection measures you configured, you will be locked out of your database and potentially lose all of its content.</p> <p>Click Done.</p> <p>Choose the name for the file containing your secrets and its location. Click Save.</p>"},{"location":"accountmanagement/Using-KeePassXC-for-Two-Factor-Authentication-on-3Engines-Cloud.html.html#step-3-add-the-entry-for-your-account","title":"Step 3 Add the entry for your account\ud83d\udd17","text":"<p>Your database should now be operational. Let\u2019s create the entry containing your username, password and TOTP for 3Engines Cloud. Click Add a new entry (the fourth button on the toolbar, marked with the red rectangle on the screenshot below).</p> <p></p> <p>The following window will appear:</p> <p></p> <p>In the Title field enter the name under which your entry should be identified in your database, for example 3Engines Cloud. Then, type your username and password.</p> <p>Click OK to save the entry.</p> <p>If the option Automatically save after every change in the General section of the application settings is enabled, you do not have to save. 
If not, press CTRL+S to save the database.</p>"},{"location":"accountmanagement/Using-KeePassXC-for-Two-Factor-Authentication-on-3Engines-Cloud.html.html#step-4-configure-totp","title":"Step 4 Configure TOTP\ud83d\udd17","text":"<p>Now we need to obtain your TOTP key.</p>"},{"location":"accountmanagement/Using-KeePassXC-for-Two-Factor-Authentication-on-3Engines-Cloud.html.html#method-1-during-account-creation","title":"Method 1: During account creation\ud83d\udd17","text":"<p>After having created an account on https://horizon.3Engines.com but before first login, you will receive the Mobile Authenticator Setup prompt, as in the following image:</p> <p></p> <p>Since you are using a computer which cannot act as a mobile device, click Unable to scan?. The QR code will now be replaced with your key:</p> <p></p> <p>Copy the code with which the QR code has just been replaced.</p> <p>Once you have your TOTP key,</p> <ul> <li>return to KeePassXC,</li> <li>right-click the entry for your account and</li> <li>choose the TOTP\u2026 -&gt; Setup TOTP\u2026 option.</li> </ul> <p>You will see the following window:</p> <p></p> <p>Paste your key into the Key: text field and keep the checkbox Default RFC 6238 token settings checked. Click OK.</p> <p>In order to view your code, right-click the entry and select TOTP\u2026 &gt; Show TOTP\u2026. It is easier, however, to simply</p> <ul> <li>left-click that entry and</li> <li>press CTRL+Shift+T.</li> </ul> <p>You can also press CTRL+T while your entry is highlighted to copy your TOTP code to your clipboard (remember that depending on settings it will disappear from your clipboard, so make sure that you paste it in time).</p> <p>The window with the code will look like this:</p> <p></p> <p>Type the 6-digit code from the above window into the One-time code text field on the 3Engines Cloud website and choose a name for the device containing the TOTP key. Please make sure that you do it before that code expires. 
If the code expires, a new one will be generated and you should type that instead. Click Submit. You should now be able to proceed with your login process.</p>"},{"location":"accountmanagement/Using-KeePassXC-for-Two-Factor-Authentication-on-3Engines-Cloud.html.html#method-2-after-another-method-of-totp-has-already-been-configured","title":"Method 2: After another method of TOTP has already been configured\ud83d\udd17","text":"<p>If the method of TOTP authentication you are currently using allows you to extract the secret key (or you have it backed up somewhere), you should be able to use that same secret key for KeePassXC as well.</p> <p>If no other options remain, contact 3Engines Cloud customer support for assistance.</p> <p>Either way, eventually you should get your secret key. Enter it in KeePassXC the same way as explained in Method 1 above - to the Key: text field. If that secret key is already added and configured for your account, no further action should be necessary. If not and you are in the process of configuring it, paste the 6-digit TOTP code from KeePassXC in the same way as you entered the code from your other device during account setup.</p>"},{"location":"accountmanagement/Using-KeePassXC-for-Two-Factor-Authentication-on-3Engines-Cloud.html.html#step-5-login-using-totp","title":"Step 5 Login using TOTP\ud83d\udd17","text":"<p>Each time you log in, type your credentials normally. After that you will see the following text field:</p> <p></p> <p>Generate your TOTP code as explained before (left-click the appropriate entry in KeePassXC and press CTRL+Shift+T) and type that code in the text field One-time code in your browser. If you want to simply copy your code to your clipboard, press CTRL+T while your entry is highlighted (remember that depending on settings it will disappear from your clipboard, so make sure that you paste it in time). 
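The one-time codes that FreeOTP and KeePassXC produce are standard RFC 6238 (TOTP) values, so any client with the same secret key yields the same code. As an illustration only (not part of the official 3Engines Cloud tooling), here is a minimal Python sketch, using just the standard library and a hypothetical base32 secret, of how such a code is derived:

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, digits=6, step=30, now=None):
    """Derive an RFC 6238 TOTP code from a base32-encoded secret."""
    # Decode the base32 secret, re-adding any stripped '=' padding.
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    # The moving factor is the number of 30-second windows since the epoch.
    counter = int(time.time() if now is None else now) // step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at a digest-derived offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# Hypothetical secret; prints the current 6-digit code for it.
print(totp("JBSWY3DPEHPK3PXP"))
```

The RFC 6238 test vector (ASCII secret 12345678901234567890, time 59 s) yields 287082 for six digits, which is why independent apps stay in sync without ever contacting the site.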
Each code lasts only 30 seconds, so if you only have a few seconds remaining on your current code, you might want to wait until the new one is generated. Now you should be signed in.</p>"},{"location":"accountmanagement/Using-KeePassXC-for-Two-Factor-Authentication-on-3Engines-Cloud.html.html#additional-information","title":"Additional information\ud83d\udd17","text":"<p>You can find additional information about using KeePassXC in its official documentation.</p>"},{"location":"accountmanagement/accountmanagement.html.html","title":"ACCOUNT MANAGEMENT","text":"<ul> <li>Registration and Setting up an Account</li> <li>How to start using dashboard services on 3Engines Cloud</li> <li>Two-Factor Authentication to 3Engines Cloud site using mobile application</li> <li>Two-Factor Authentication to 3Engines Cloud site using KeePassXC on desktop</li> <li>How to activate OpenStack CLI access to 3Engines Cloud cloud using one- or two-factor authentication</li> <li>How to manage TOTP authentication on 3Engines Cloud</li> <li>Adding and editing Organization</li> <li>How to buy credits using Pay Per Use wallet on 3Engines Cloud</li> <li>Forgotten Password</li> <li>Editing profile</li> <li>Wallets and Contracts Management</li> <li>Services</li> <li>Inviting new user to your Organization</li> <li>Removing user from Organization</li> <li>Tenant manager users and roles on 3Engines Cloud</li> <li>Helpdesk and Support</li> <li>Privacy policy for clients</li> <li>Cookie consent on 3Engines Cloud</li> </ul>"},{"location":"cloud/Block-storage-and-object-storage-performance-limits-on-3Engines-Cloud.html.html","title":"Block storage and object storage performance limits on 3Engines Cloud\ud83d\udd17","text":"<p>On 3Engines Cloud, there are performance limits for HDD, NVMe (SSD), and Object Storage to ensure stable operation and protect against accidental DDoS 
attacks.</p>"},{"location":"cloud/Block-storage-and-object-storage-performance-limits-on-3Engines-Cloud.html.html#current-limits","title":"Current limits\ud83d\udd17","text":"Block HDD 500 IOPS (read and write) Block SSD/NVMe <p>3000 IOPS (read and write)</p> <p>NOTE: On 3Engines Cloud, all SSD storage is NVMe-based.</p> S3 Object Storage (General Tier) <p>2000 operations per second with a</p> <ul> <li>2500 burst limit per bucket and</li> <li>150 MB/s transfer per request</li> </ul> <p>In the majority of cases, the actual throughput may be larger because:</p> <ul> <li>Typical downloads use multiple requests and</li> <li>More than 99.95% of requests stay below 100 MB/s.</li> </ul> <p>Again, this limit primarily helps mitigate accidental DDoS scenarios.</p>"},{"location":"cloud/DNS-as-a-Service-on-3Engines-Cloud-Hosting.html.html","title":"DNS as a Service on 3Engines Cloud Hosting\ud83d\udd17","text":"<p>DNS as a Service (DNSaaS) provides the ability to manage the configuration of a user\u2019s domains. 
Managing configuration means that the user is capable of creating, updating and deleting the following DNS records:</p> Type Description A Address record AAAA IPv6 address record CNAME Canonical name record MX Mail exchange record PTR Pointer record SPF Sender Policy Framework SRV Service locator SSHFP SSH Public Key Fingerprint TXT Text record <p>DNS configuration management is available via the OpenStack web dashboard (Horizon), the OpenStack command line interface as well as via the API.</p> <p>DNS records management is performed on the level of an OpenStack project.</p> <p>Since the purpose of DNSaaS is to deal with external domain names, the internal name resolution (name resolution for private IP addresses within user\u2019s projects) is not covered by this documentation.</p>"},{"location":"cloud/DNS-as-a-Service-on-3Engines-Cloud-Hosting.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Domain delegation in registrar\u2019s system</li> <li>Domain configuration through Zone configuration</li> <li>Checking the presence of the domain on the Internet</li> <li>Adding new record for the domain</li> <li>Adding records for subdomains</li> <li>Managing records</li> <li>Limitations in OpenStack DNSaaS</li> </ul>"},{"location":"cloud/DNS-as-a-Service-on-3Engines-Cloud-Hosting.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p> <p>No. 2 Must have access to a project in 3Engines Cloud OpenStack account</p> <p>If you are a tenant manager, you will be able to either use the existing basic project or create new projects for yourself or your users.</p> <p>If you are a user of the account, the tenant manager will have already created a project for you.</p> <p>No. 
3 Basic knowledge of DNS notions and principles</p> <p>We assume you already have a</p> <ul> <li>basic knowledge of Domain Name Service principles as well as</li> <li>understanding of the purpose of DNS records.</li> </ul> <p>If not, please see the DNS article on Wikipedia or the OpenStack DNSaaS command line reference.</p> <p>No. 4 Must have domain purchased from a registrar</p> <p>You also must own a domain purchased from any registrar (domain reseller). Obtaining a domain from registrars is not covered in this article.</p> <p>No. 5 Must have a Linux server with an assigned IP address</p> <p>To verify DNS creation and propagation, you will use the dig command from Linux. You will also need an IP address to point the domain name to. You may have already created one such VM in your 3Engines Cloud account and if not, here is how to create a virtual machine, assign a floating IP to it and access it from a Windows desktop computer:</p> <p>How to create a Linux VM and access it from Windows desktop on 3Engines Cloud</p> <p>Or, you might connect from a Linux based computer to the cloud:</p> <p>How to create a Linux VM and access it from Linux command line on 3Engines Cloud</p> <p>In both cases, the article will contain a section on connecting a floating IP to the newly created VM. The generated IP address will vary, but for the sake of concreteness we shall assume that it is 64.225.133.254. You will enter that value later in this article, to create a record set for the site or service you are making.</p>"},{"location":"cloud/DNS-as-a-Service-on-3Engines-Cloud-Hosting.html.html#step-1-delegate-domain-to-your-registrars-system","title":"Step 1 Delegate domain to your registrar\u2019s system\ud83d\udd17","text":"<p>The configuration of the domain name in your registrar\u2019s system must point to the NS records of 3Engines name servers. 
This can be achieved in two ways:</p> <p>Option 1 - Use 3Engines name servers (recommended)</p> <p>Configure NS records for your domain to the following 3Engines name servers:</p> Purpose Name Server IP primary name server cloud-dns1.3Engines.com 91.212.141.94 secondary name server cloud-dns2.3Engines.com 91.212.141.102 secondary name server cloud-dns3.3Engines.com 91.212.141.86 <p>Option 2 - Set up your own glue records (not recommended)</p> <p>Warning</p> <p>This configuration option may not be supported by some registrars.</p> <p>Configure glue records for your domain, so that they point to the following IP addresses:</p> Purpose Name Server IP primary name server ns1.exampledomain.com 91.212.141.94 secondary name server ns2.exampledomain.com 91.212.141.102 secondary name server ns3.exampledomain.com 91.212.141.86"},{"location":"cloud/DNS-as-a-Service-on-3Engines-Cloud-Hosting.html.html#step-2-zone-configuration","title":"Step 2 Zone configuration\ud83d\udd17","text":"<p>Zone configuration means defining parameters for the main domain name you have purchased.</p> <p>To manage the domain exampledomain.com in OpenStack, log in to the OpenStack dashboard, choose the right project if different than the default, go to Project \u2192 DNS \u2192 Zones, click Create Zone and fill in the required fields:</p> <p></p> <p>Here is what the parameters mean:</p> <ul> <li>Name: your domain name</li> <li>Description: free text description</li> <li>Email Address: an administrative e-mail address associated with the domain</li> <li>TTL: Time To Live in seconds - a period of time between refreshing cache in DNS servers. Please note that the longer the TTL, the faster external DNS servers will recognize your domain name, but any changes you introduce will also propagate more slowly. The default value of 3600 seconds is a reasonable compromise.</li> <li>Type: You may choose whether OpenStack name servers will be primary or secondary for your domain. Default: Primary. 
If you want to set up secondary name servers, you just define the IP addresses of the master DNS servers for the domain.</li> </ul> <p>After submitting, your domain should be served by OpenStack.</p>"},{"location":"cloud/DNS-as-a-Service-on-3Engines-Cloud-Hosting.html.html#step-3-checking-the-presence-of-the-domain-on-the-internet","title":"Step 3 Checking the presence of the domain on the Internet\ud83d\udd17","text":"<p>It usually takes from 24 up to 48 hours for the domain name to propagate through the Internet so it will not be available right away. Occasionally, the domain name starts resolving in a matter of minutes or hours instead of days, so it pays to try the domain address in your browser an hour or two after configuring the zone for the domain.</p> <p>There are several ways of checking whether the domain name has propagated.</p> Domain name in the browser <p>The most natural way of checking is to enter the domain name into the browser. If you get a message that the site cannot be found, you will have to wait longer.</p> <p>Browsers, in general, do not provide messages that pinpoint the lack of propagation as the source of error. Be sure to check in the browser again after you add records to the zone (see below).</p> Check with Linux dig command <p>The dig command has several parameters. The following combination will show the presence of the name servers in the global DNS system:</p> <pre><code>dig -t any +noall +answer exampledomain.com @cloud-dns1.3Engines.com\nexampledomain.com. 3600 IN SOA cloud-dns2.3Engines.com. XXXXXXXXX.YYYYYYYY.com. 1675003306 3588 600 86400 3600\nexampledomain.com. 3600 IN NS cloud-dns1.3Engines.com.\nexampledomain.com. 3600 IN NS cloud-dns3.3Engines.com.\nexampledomain.com. 3600 IN NS cloud-dns2.3Engines.com.\n</code></pre> Check with Linux curl command The curl command transfers data from a domain address to the host on which it is running. 
Here is what the output would look like for a domain name that does not exist: <pre><code>curl someinvaliddomain.com\ncurl: (6) Could not resolve host: someinvaliddomain.com\n</code></pre> <p>If the site responds with HTML, that means the domain name was resolved:</p> <pre><code>curl exampledomain.com\n&lt;!DOCTYPE html&gt;\n&lt;html&gt;\n&lt;head&gt;\n ...\n</code></pre> Check with sites that specialize in DNS configuration tracking <p>There are sites that will show, on a map of the world, whether chosen servers on the Internet know about the domain name or not. Search in the search engine of your choice for a key phrase such as \u201cDNS checker propagation\u201d, choose a site and enter the domain name.</p> <p>Specify A to see the propagation of the domain itself and specify NS to see the propagation of nameservers across the Internet.</p>"},{"location":"cloud/DNS-as-a-Service-on-3Engines-Cloud-Hosting.html.html#step-4-adding-new-record-for-the-domain","title":"Step 4 Adding new record for the domain\ud83d\udd17","text":"<p>To add a new record to the domain, click on Create Record Set next to the domain name and fill in the required fields. The most important entry is the one connecting the domain name to the IP address you have. 
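The same record operations are also available from the OpenStack command line interface mentioned earlier, not just the Horizon form. This is a sketch only: it assumes the Designate CLI plugin (python-designateclient) is installed, your OpenStack credentials are loaded, and the zone already exists; flag names may differ between client versions:

```shell
# Create an A record set in the exampledomain.com. zone pointing at the
# server's floating IP (64.225.133.254 in this tutorial). Note the
# trailing dots on both the zone and the record name.
openstack recordset create --type A --record 64.225.133.254 \
    exampledomain.com. www.exampledomain.com.

# List all record sets in the zone to confirm the record was created:
openstack recordset list exampledomain.com.
```

The trailing-dot convention for fully qualified names applies here exactly as it does in the Horizon form.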
To configure the address of a web server in exampledomain.com, so that it resolves to 64.225.133.254, the Floating IP address of your server, fill in the form as follows:</p> <p></p> <p>The parameters are:</p> <ul> <li>Type: Type of record (for example A, MX, etc.)</li> <li>Name: name of the record (for example www.exampledomain.com, mail.exampledomain.com, \u2026)</li> <li>Description: free text description</li> <li>TTL: Time To Live in seconds - a period of time between refreshing cache in DNS servers.</li> <li>Records: Desired record value (there may be more than one - one per line):</li> </ul> <ul> <li>for records of Type A put the IP address</li> <li>for records of Type MX put the name of the mail server which hosts e-mails for the domain</li> <li>for records of Type CNAME put the original name which is to be aliased</li> </ul> <p>Submit the form and check whether your configuration works:</p> <pre><code>dig -t any +noall +answer exampledomain.com @cloud-dns1.3Engines.com\nexampledomain.com. 3600 IN SOA cloud-dns2.3Engines.com. XXXXXXXXX.YYYYYYYY.com. 1675325538 3530 600 86400 3600\nexampledomain.com. 3600 IN A 64.225.133.254\nexampledomain.com. 3600 IN NS cloud-dns1.3Engines.com.\nexampledomain.com. 3600 IN NS cloud-dns2.3Engines.com.\nexampledomain.com. 3600 IN NS cloud-dns3.3Engines.com.\n</code></pre> <p>Note</p> <p>Each time a domain or server name is added or edited, add a dot \u2018.\u2019 at the end of the entry. For example: exampledomain.com. 
or mail.exampledomain.com.</p>"},{"location":"cloud/DNS-as-a-Service-on-3Engines-Cloud-Hosting.html.html#step-5-adding-records-for-subdomains","title":"Step 5 Adding records for subdomains\ud83d\udd17","text":"<p>Defining subdomains is similar except that, normally, the subdomain would propagate within minutes instead of days.</p> <p>As previously, use DNS -&gt; Zones -&gt; Record Sets.</p> <p>To configure the address of a web server in exampledomain.com, so that www.exampledomain.com resolves to 64.225.133.254, the Floating IP address of your server, fill in the form as follows:</p> <p></p> <p>Submit the form and check whether your configuration works:</p> <pre><code>dig -t any +noall +answer www.exampledomain.com @cloud-dns1.3Engines.com\nwww.exampledomain.com. 3600 IN A 64.225.133.254\n</code></pre>"},{"location":"cloud/DNS-as-a-Service-on-3Engines-Cloud-Hosting.html.html#step-6-managing-records","title":"Step 6 Managing records\ud83d\udd17","text":"<p>Anytime you want to review, edit or delete records in your domain, visit the OpenStack dashboard, Project \u2192 DNS \u2192 Zones. After clicking the domain name of your interest, choose the Record Sets tab and see the list of all records:</p> <p></p> <p>From this screen you can update or delete records.</p>"},{"location":"cloud/DNS-as-a-Service-on-3Engines-Cloud-Hosting.html.html#limitations","title":"Limitations\ud83d\udd17","text":"<p>There are the following limitations in OpenStack DNSaaS:</p> <ul> <li>You cannot manage NS records for your domain. 
Therefore</li> </ul> <ul> <li>you cannot add additional secondary name servers</li> <li>you are unable to delegate subdomains to external servers</li> <li>Even though you are able to configure reverse DNS for your domain, this configuration will have no effect since reverse DNS for 3Engines Cloud IP pools is managed on DNS servers other than OpenStack DNSaaS.</li> </ul>"},{"location":"cloud/DNS-as-a-Service-on-3Engines-Cloud-Hosting.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>Once an OpenStack object has a floating IP address, you can use the DNS service to propagate a domain name and, thus, create a service or a site. There are several situations in which you can create a floating IP address:</p> You already have an existing VM Follow the procedure in article How to Add or Remove Floating IP\u2019s to your VM on 3Engines Cloud to assign a new floating IP to it. Assign floating IP while creating a new VM from scratch That is the approach in articles from Prerequisite No. 5. Kubernetes services can have an automatically assigned floating IP The following article shows how to deploy an HTTPS service on Kubernetes: <p>Deploying HTTPS Services on Magnum Kubernetes in 3Engines Cloud Cloud</p>"},{"location":"cloud/Dashboard-Overview-Project-Quotas-And-Flavors-Limits-on-3Engines-Cloud.html.html","title":"Dashboard Overview \u2013 Project Quotas And Flavors Limits on 3Engines Cloud\ud83d\udd17","text":"<p>While using the 3Engines Cloud platform, one of the first things you will spot is the \u201cLimit Summary\u201d. Each project is restricted by preset quotas. This prevents system capacity from being exhausted without notification and guarantees free resources.</p> <p>On the first screen after logging into the Horizon Dashboard you will see seven charts reflecting the limits most essential to the stability of the platform. 
You can always show this screen with the command Compute -&gt; Overview.</p> <p></p> Instances Number of virtual machines your project can contain at once, regardless of the flavors (it could be eo1.xsmall as well as hm.2xlarge). VCPUs Number of cores you can assign while launching VMs of different flavors (it varies for each flavor, for example eo1.xsmall has only one core, while eo1.large has four VCPUs). RAM The amount of RAM you have available in your project according to the assigned quota. Floating IPs A pool of unique Floating IP addresses assigned only to your project. Security Groups Two of the ten Security Groups are preset during the creation of the domain (one is a default group and the second one allows connection via SSH and RDP and pinging instances). Volumes Disks provided by the flavor alone (\u201cCreate new volume\u201d unchecked during instance creation) won\u2019t count in. Volume Storage You can store data from your instances and disposable volumes. <p>During the VM creation process, while choosing a flavor, you may spot a yellow exclamation mark next to some values. It means that selecting this flavor would exceed the remaining quota, or (more rarely) that this flavor is unavailable due to some maintenance reason.</p> <p></p> <p>You can expand the flavor summary by clicking the arrow on the left. The charts will show the current free resources as well as the resources that will remain after creating a new instance.</p> <p>If the quota would be exceeded, OpenStack will not allow you to choose this particular flavor.</p>"},{"location":"cloud/How-To-Create-a-New-Linux-VM-With-NVIDIA-Virtual-GPU-in-the-OpenStack-Dashboard-Horizon-on-3Engines-Cloud.html.html","title":"How To Create a New Linux VM With NVIDIA Virtual GPU in the OpenStack Dashboard Horizon on 3Engines Cloud\ud83d\udd17","text":"<p>You can create a Linux virtual machine with an NVIDIA RTX A6000 as an additional graphics card. 
The card contains</p> <ul> <li>10,752 CUDA cores for rendering, graphics operations and heavy parallel computations,</li> <li>336 Tensor cores, which accelerate AI and data science model training, and</li> <li>84 RT cores, which speed up ray tracing, shading, denoising, photorealistic rendering and so on.</li> </ul> <p>There are four variants, using 6, 12, 24, or 48 GB of vGPU RAM. You will be able to select the particular model by choosing the proper flavor when creating the instance in Horizon (see below).</p>"},{"location":"cloud/How-To-Create-a-New-Linux-VM-With-NVIDIA-Virtual-GPU-in-the-OpenStack-Dashboard-Horizon-on-3Engines-Cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>How to create an instance with NVIDIA support</li> <li>How to choose the proper flavor for the task at hand</li> <li>How to add a proper key pair in order to</li> <li>SSH into the virtual machine you create, or</li> <li>use the console within the Horizon interface, and</li> <li>verify that you are using the NVIDIA vGPU.</li> </ul>"},{"location":"cloud/How-To-Create-a-New-Linux-VM-With-NVIDIA-Virtual-GPU-in-the-OpenStack-Dashboard-Horizon-on-3Engines-Cloud.html.html#step-1-create-new-instance-with-nvidia-image-support","title":"Step 1 Create New Instance with NVIDIA Image Support\ud83d\udd17","text":"<p>To define a new instance, navigate through the following menus:</p> <p>Project \u2192 Compute \u2192 Instances.</p> <p></p> <p>Click Launch Instance to get the following screen:</p> <p></p> <p>Insert the name of the instance (e.g. \u201cvm_with_vgpu\u201d) and click the Next button. In the next screen, you will choose the operating system for the new virtual machine you are defining:</p> <p></p> <p>Your goal is to use an image with predefined NVIDIA support. To list all such images, click on the Available field and enter \u2018NVIDIA\u2019 into it. 
Only the images with NVIDIA in their names will be listed:</p> <p></p> <p>Select the Instance Boot Source (e.g. \u201cImage\u201d), and choose the desired image (e.g. \u201cUbuntu 20.04 NVIDIA\u201d) by clicking the arrow.</p> <p>Images marked with \u201cNVIDIA\u201d are fully operational. They come preinstalled with</p> <ul> <li>special NVIDIA Grid drivers</li> <li>a licence token, as well as</li> <li>the CUDA library.</li> </ul> <p>Note</p> <p>If you do not need a system disk bigger than the size defined in the chosen flavor, we recommend setting the \u201cCreate New Volume\u201d option to \u201cNo\u201d.</p> <p>Click the Next button to get to the following screen:</p> <p></p> <p>You will now choose one of the four models of the RTX A6000 card.</p>"},{"location":"cloud/How-To-Create-a-New-Linux-VM-With-NVIDIA-Virtual-GPU-in-the-OpenStack-Dashboard-Horizon-on-3Engines-Cloud.html.html#step-2-select-card-model-flavor","title":"Step 2 Select Card Model / Flavor\ud83d\udd17","text":"<p>The four available models (RTXA6000-6C, RTXA6000-12C, RTXA6000-24C, and RTXA6000-48C) are described in this table:</p> <p></p> <p>The column VM Name contains flavor names vm.a6000.1, vm.a6000.2, vm.a6000.4, vm.a6000.8. Again, type a6000 into the Available field to list only the NVIDIA flavors:</p> <p></p> <p>Taking into account the data from the table above, if you select flavor vm.a6000.2, you will use 4 virtual cores and 28 GB of \u201cnormal\u201d RAM, and, simultaneously, you will also choose the RTXA6000-12C model with 12 GB of virtual GPU RAM.</p> <p>Note</p> <p>Yellow triangles in the listing mean that you cannot select that row because one of the system resources is already engaged by other instances. 
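The check behind those triangles is simple arithmetic: a flavor is selectable only if its requirements fit within your free quota. A minimal sketch (the free-resource numbers below are illustrative, not your project's real quota; read your real values from Compute -&gt; Overview):

```shell
# A minimal sketch of the quota check behind the yellow triangles.
# Free-resource numbers are illustrative; vm.a6000.8's VCPU count is an
# assumption here, only its 112 GB RAM requirement comes from the table.
fits_quota() {
  # $1 = flavor VCPUs, $2 = flavor RAM in GB
  # $3 = free VCPUs,   $4 = free RAM in GB
  [ "$1" -le "$3" ] && [ "$2" -le "$4" ]
}

# vm.a6000.2 needs 4 VCPUs and 28 GB RAM; with 8 VCPUs / 64 GB free it fits:
if fits_quota 4 28 8 64; then echo "vm.a6000.2 fits"; fi

# vm.a6000.8 needs 112 GB RAM; with only 64 GB free it does not:
if ! fits_quota 16 112 8 64; then echo "vm.a6000.8 exceeds quota"; fi
```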
If you, say, wanted to select the strongest NVIDIA flavor, vm.a6000.8, you would first have to obtain 112 GB or more of available RAM and only then would you be able to opt for that flavor.</p> <p>In the situation above, select vm.a6000.2 and continue going through the usual motions of selecting instance elements to finish the procedure.</p>"},{"location":"cloud/How-To-Create-a-New-Linux-VM-With-NVIDIA-Virtual-GPU-in-the-OpenStack-Dashboard-Horizon-on-3Engines-Cloud.html.html#step-3-finish-creating-the-instance","title":"Step 3 Finish Creating the Instance\ud83d\udd17","text":"<p>Click \u201cNetworks\u201d and then choose the desired networks.</p> <p></p> <p>Open \u201cSecurity Groups\u201d. After that, choose \u201callow_ping_ssh_icmp_rdp\u201d and \u201cdefault\u201d.</p> <p></p> <p>Choose or generate an SSH key pair for your VM, as explained in the article How to create key pair in OpenStack Dashboard on 3Engines Cloud. Next, launch your instance by clicking the blue button.</p> <p></p> <p>You will see the \u201cInstances\u201d menu with your newly created VM.</p> <p></p> <p>Note</p> <p>If you want to make your VM accessible from the Internet, see this article: How to Add or Remove Floating IP\u2019s to your VM on 3Engines Cloud</p>"},{"location":"cloud/How-To-Create-a-New-Linux-VM-With-NVIDIA-Virtual-GPU-in-the-OpenStack-Dashboard-Horizon-on-3Engines-Cloud.html.html#step-4-issue-commands-from-the-console","title":"Step 4 Issue Commands from the Console\ud83d\udd17","text":"<p>Open the drop-down menu and choose \u201cConsole\u201d.</p> <p></p> <p>You can connect to your virtual machine using SSH; see this article: How to connect to your virtual machine via SSH in Linux on 3Engines Cloud</p> <p>You can also use the SPICE console via the OpenStack Dashboard.</p> <p>Click on the black terminal area (to activate access to the console). 
Type:</p> <pre><code>eoconsole\n</code></pre> <p>and hit Enter on the keyboard.</p> <p></p> <p>Enter and retype a new password.</p> <p></p> <p>Now you can type commands.</p> <p></p> <p>To check the status of the vGPU device, enter the command:</p> <pre><code>nvidia-smi\n</code></pre> <p></p> <p>After you finish, type \u201cexit\u201d.</p> <pre><code>exit\n</code></pre> <p></p> <p>This will close the session.</p>"},{"location":"cloud/How-to-access-the-VM-from-OpenStack-console-on-3Engines-Cloud.html.html","title":"How to access the VM from OpenStack console on 3Engines Cloud\ud83d\udd17","text":"<p>Once you have created a virtual machine in OpenStack, you will need to perform various administrative tasks such as:</p> <ul> <li>installing and uninstalling software,</li> <li>uploading and downloading files,</li> <li>setting up passwords and access policies</li> </ul> <p>and so on. There are three ways to enter the back end of virtual machines:</p> Linux For Linux, whether of the Ubuntu or CentOS variety, you will be using the console that is present with every VM. You enter the console as a predefined user called eoconsole, define the password and switch to another, also predefined, user called eouser. After that, each time you use the console, you will be working as eouser. Windows You only need to create an Administrator profile within the virtual machine. Once you do that, you will work with Windows just like on a desktop computer, with possible delays in response depending on the speed of your Internet connection. Fedora <p>Fedora images technically belong to the family of Linux operating systems, but cannot be accessed via the console. To be more precise, you will see the console but won\u2019t be able to log in as the standard eoconsole user.</p> <p>As these images are only used for automatic creation of Kubernetes instances, you have to enter them using Kubernetes methods. 
That boils down to using the kubectl exec command (see below).</p>"},{"location":"cloud/How-to-access-the-VM-from-OpenStack-console-on-3Engines-Cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Use console for Linux based virtual machines</li> <li>Use console for Windows based virtual machines</li> <li>Use console for Fedora based virtual machines</li> </ul>"},{"location":"cloud/How-to-access-the-VM-from-OpenStack-console-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p>"},{"location":"cloud/How-to-access-the-VM-from-OpenStack-console-on-3Engines-Cloud.html.html#using-console-for-administrative-tasks-within-linux-based-vms","title":"Using console for administrative tasks within Linux based VMs\ud83d\udd17","text":"<ol> <li>Go to https://horizon.3Engines.com and select your authentication method:</li> </ol> <p>You will enter the Horizon main screen.</p> <ol> <li>Open the Compute/Instances tab and select the desired VM by clicking on its name:</li> </ol> <p></p> <ol> <li>Select the \u201cConsole\u201d pane</li> </ol> <p></p> <ol> <li>When logging in for the first time, you will see the generic console screen:</li> </ol> <p></p> <p>Click on the link Click here to show only console and then click on the console surface to make it active.</p> <p>Enter the predefined user name eoconsole. You will be asked to set up a new password, twice. This user eoconsole serves only for you to enter the console.</p> <p>The next step is to start using another user, called eouser. Just like eoconsole, it is already defined for you, so you only need to switch from eoconsole to eouser. 
The Linux command to do that is</p> <pre><code>sudo su - eouser\n</code></pre> <p></p> <p>You will then use the console as a predefined user called eouser.</p> <p>Attention</p> <p>Google Chrome seems to work slowly while using the OpenStack console. Firefox works well.</p>"},{"location":"cloud/How-to-access-the-VM-from-OpenStack-console-on-3Engines-Cloud.html.html#using-console-to-perform-administrative-tasks-within-fedora-vms","title":"Using console to perform administrative tasks within Fedora VMs\ud83d\udd17","text":"<p>For normal VMs, choose either Ubuntu- or CentOS-based images while creating a VM \u2013 but not Fedora. It is meant only for automatic creation of instances that belong to Kubernetes clusters. Such instances will have either the word master or node in their names. Here is a typical series of instances that belong to two different clusters, called vault and k8s-23:</p> <p></p> <p>So if you click on any of these Kubernetes instances, you will be able to enter the console but will not be able to use it. In this context, the Fedora image is intentionally set up in such a way that you cannot enter it through the console. You will, typically, see this after Fedora starts:</p> <p></p> <p>Instead, it is possible to enter one such instance using the main Kubernetes command, kubectl, with the exec parameter. The main command would look like this:</p> <pre><code>kubectl -n vault exec -it vault-0 -- sh\n</code></pre> <p>where vault is the namespace within which the pod vault-0 will be found and entered.</p> <p>Further explanations of the exec command are out of the scope of this article. 
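A typical session around that command can be sketched as follows; the vault namespace and vault-0 pod are the example names from above, and your own names will differ:

```shell
# A sketch; assumes kubectl is already installed and configured for the
# target cluster. Names (vault, vault-0) are the example values above.
kubectl get pods -n vault                # find the pod you want to enter
kubectl -n vault exec -it vault-0 -- sh  # open an interactive shell in it
```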
The following article will show you how to activate the kubectl command after the cluster has been created:</p> <p>How To Access Kubernetes Cluster Post Deployment Using Kubectl On 3Engines Cloud OpenStack Magnum</p> <p>This article shows an example of an exec command to enter the VM and, later, save the data within it:</p> <p>Volume-based vs Ephemeral-based Storage for Kubernetes Clusters on 3Engines Cloud OpenStack Magnum</p>"},{"location":"cloud/How-to-access-the-VM-from-OpenStack-console-on-3Engines-Cloud.html.html#performing-administrative-tasks-within-windows-based-vms","title":"Performing administrative tasks within Windows based VMs\ud83d\udd17","text":"<p>In the case of Windows, set a new password for the Administrator profile.</p> <p></p> <p>You will then be able to perform administrative tasks on your instance.</p>"},{"location":"cloud/How-to-clone-existing-and-configured-VMs-on-3Engines-Cloud.html.html","title":"How to clone existing and configured VMs on 3Engines Cloud\ud83d\udd17","text":"<p>The simplest way to create the snapshot of your machine is using \u201cHorizon\u201d, the graphical interface of the OpenStack dashboard.</p> <p>In summary, there will be two operations:</p> <ol> <li>Creating a snapshot</li> <li>Restoring the snapshot to a newly created VM.</li> </ol> <p>To start, please visit our website https://horizon.3Engines.com and log in.</p> <p></p> <p>After logging in, in the \u201cInstances\u201d menu select the VM to be cloned and create its snapshot by clicking the \u201cActions\u201d menu</p> <p></p> <p>Once the snapshot is ready, you may see it on the \u201cImages\u201d page of Horizon. Select its name to see its properties.</p> <p></p> <p>Now, you may click \u201cLaunch\u201d in the upper right corner of the window or just go back to the \u201cInstances\u201d menu and launch a new instance.</p> <p>The full manual is here: How to create new Linux VM in OpenStack Dashboard Horizon on 3Engines Cloud</p> <p>But if this process is familiar to you, there is only one difference. 
Choose \u201cboot from snapshot\u201d as the source instead of \u201cboot from image\u201d and select your snapshot from the list below. In the next steps, select parameters (flavor, size) at least equal to those of the original. (The \u201cLaunch Instance\u201d button will be unavailable until all necessary settings are completed.)</p> <p>The new machine gets configured as a clone of the original one, except for network addresses (a new floating IP must be associated) and network policies.</p> <p>Caution</p> <p>If the original machine had any additional volumes attached to it, they should also be cloned.</p> <p>You may also want to read: Volume snapshot inheritance and its consequences on 3Engines Cloud.</p>"},{"location":"cloud/How-to-create-Windows-VM-on-OpenStack-Horizon-and-access-it-via-web-console-on-3Engines-Cloud.html.html","title":"How to create Windows VM on OpenStack Horizon and access it via web console on 3Engines Cloud\ud83d\udd17","text":"<p>This article provides a straightforward way of creating a functional Windows VM on 3Engines Cloud, using the Horizon graphical interface.</p> <p>The idea is to</p> <ul> <li>start the creation of a Windows virtual machine from the default Horizon dashboard and then</li> <li>access it via the web console,</li> </ul> <p>all from your Internet browser.</p>"},{"location":"cloud/How-to-create-Windows-VM-on-OpenStack-Horizon-and-access-it-via-web-console-on-3Engines-Cloud.html.html#what-are-we-going-to-cover","title":"What Are We Going To Cover\ud83d\udd17","text":"<ul> <li>Accessing the Launch Instance menu</li> <li>Choosing the Instance name</li> <li>Choosing source</li> <li>Choosing flavor</li> <li>Choosing networks</li> <li>Choosing security groups</li> <li>Launching the virtual machine</li> <li>Setting the Administrator password</li> </ul>"},{"location":"cloud/How-to-create-Windows-VM-on-OpenStack-Horizon-and-access-it-via-web-console-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 
1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p>"},{"location":"cloud/How-to-create-Windows-VM-on-OpenStack-Horizon-and-access-it-via-web-console-on-3Engines-Cloud.html.html#step-1-access-the-launch-instance-menu","title":"Step 1: Access the Launch Instance menu\ud83d\udd17","text":"<p>In the Horizon dashboard, navigate to Compute -&gt; Instances. Click the Launch Instance button at the top of the Instances section:</p> <p></p> <p>You should get the following window:</p> <p></p>"},{"location":"cloud/How-to-create-Windows-VM-on-OpenStack-Horizon-and-access-it-via-web-console-on-3Engines-Cloud.html.html#step-2-choose-the-instance-name","title":"Step 2: Choose the instance name\ud83d\udd17","text":"<p>In the window which appears, enter the name you wish to give to your instance in the Instance Name text field. In this example, we use test-windows-vm as the name:</p> <p></p> <p>Click Next &gt;.</p>"},{"location":"cloud/How-to-create-Windows-VM-on-OpenStack-Horizon-and-access-it-via-web-console-on-3Engines-Cloud.html.html#step-3-choose-source","title":"Step 3: Choose source\ud83d\udd17","text":"<p>The default value in the drop-down menu Select Boot Source is Image, meaning that you will choose from one of the images that are present in your version of Horizon. If another value is selected, revert to Image instead.</p> <p></p> <p>Enter windows in the search field in the Available section to filter Windows images:</p> <p></p> <p>Choose the newest available version by clicking \u2191 next to it. 
As of the writing of this article, it is Windows 2022.</p> <p></p> <p>Your chosen image should appear in the Allocated section:</p> <p></p> <p>Click Next &gt;.</p> <p>If you allocate the wrong image by mistake, you can remove it from the Allocated section by clicking \u2193 next to its name.</p>"},{"location":"cloud/How-to-create-Windows-VM-on-OpenStack-Horizon-and-access-it-via-web-console-on-3Engines-Cloud.html.html#step-4-choose-flavor","title":"Step 4: Choose flavor\ud83d\udd17","text":"<p>In this step you will choose the flavor of your virtual machine. Flavors manage access to resources such as VCPUs, RAM and storage.</p> <p>The following screenshot shows what the flavors table looks like in general:</p> <p></p> <p>The presence of yellow warning triangles means that the flavor in that row is unavailable to you. To see the exact reason for this unavailability, hover your mouse over that triangle, like so:</p> <p></p> <p>Here are the flavors which you can choose when creating a Windows virtual machine in each particular cloud:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <p>On the WAW4-1 cloud there are no flavors whose name contains w, so it cannot be used to run Windows.</p> <p>On the WAW3-1 cloud, only flavors which have the name starting with hmw can be used to run Windows.</p> <p>Filter them by entering hmw in the search bar in the Available section:</p> <p></p> <p>On the WAW3-2 cloud, only flavors which have the name starting with hmaw can be used to run Windows.</p> <p>Filter them by entering hmaw in the search bar in the Available section:</p> <p></p> <p>On the FRA1-2 cloud, only flavors which have the name starting with hmw can be used to run Windows.</p> <p>Filter them by entering hmw in the search bar in the Available section:</p> <p></p> <p>Always be sure to check the actual flavors (that info can be found in the Horizon dashboard).</p> <p>Choose the flavor which suits you best and click \u2191 next to it to allocate it.</p> <p>Click Next &gt;.</p> <p>Note</p> <p>In 
examples that follow, we use two networks, one with a name starting with cloud_ and the other with a name starting with eodata_. The former network should always be present in the account, but the latter may or may not be present. If you do not have a network whose name starts with eodata_, you may create one or use any other network that you already have and want to use.</p>"},{"location":"cloud/How-to-create-Windows-VM-on-OpenStack-Horizon-and-access-it-via-web-console-on-3Engines-Cloud.html.html#step-5-attach-networks-to-your-virtual-machine","title":"Step 5: Attach networks to your virtual machine\ud83d\udd17","text":"<p>The next step contains the list of networks available to you:</p> <p></p> <p>By default, you should have access to the following networks:</p> <ul> <li>A network which has the same name as your project - it can be used to connect your virtual machines together and access the Internet.</li> <li>The network which has eodata in its name - it can be used to access the EODATA repository containing Earth observation data.</li> </ul> <p>Allocate both of them and click Next &gt;.</p> <p>The next step is called Network Ports. In it, simply click Next &gt; without doing anything else.</p>"},{"location":"cloud/How-to-create-Windows-VM-on-OpenStack-Horizon-and-access-it-via-web-console-on-3Engines-Cloud.html.html#step-6-choose-security-groups","title":"Step 6: Choose security groups\ud83d\udd17","text":"<p>Security groups control Internet traffic for your virtual machine.</p> <p>In this step, make sure that the default security group is allocated. It blocks incoming network traffic and allows outgoing network traffic.</p> <p>The allow_ping_ssh_icmp_rdp group exposes your VM to various types of incoming network traffic; here, do not allocate it. It is not needed for the purposes of this article, since you will only access your virtual machine using the web console. 
You should still be able to perform standard Windows operations such as browsing the Internet or accessing e-mail without this security group.</p> <p></p>"},{"location":"cloud/How-to-create-Windows-VM-on-OpenStack-Horizon-and-access-it-via-web-console-on-3Engines-Cloud.html.html#step-7-launch-your-virtual-machine","title":"Step 7: Launch your virtual machine\ud83d\udd17","text":"<p>Other steps from the Launch Instance window are optional. Once you have done the previous steps of this article, click the Launch Instance button:</p> <p></p> <p>Your virtual machine should appear in the Instances section of the Horizon dashboard. Wait until its Status is Active:</p> <p></p> <p>Once the Status is Active, the virtual machine should be running. The next step involves setting up access to it.</p>"},{"location":"cloud/How-to-create-Windows-VM-on-OpenStack-Horizon-and-access-it-via-web-console-on-3Engines-Cloud.html.html#step-8-set-the-administrator-password","title":"Step 8: Set the Administrator password\ud83d\udd17","text":"<p>Once your instance has Active status, click on its name:</p> <p></p> <p>You should see a page containing information about your instance. Navigate to the Console tab:</p> <p></p> <p>You should see the web console with which you can control your virtual machine. When the system finishes startup, you will see a prompt to set the Administrator password:</p> <p></p> <p>Click OK. 
You should now see two text fields:</p> <p></p> <p>Enter your chosen password in the New password text field.</p> <p>Enter it again in the Confirm password text field.</p> <p>Click the right arrow next to the Confirm password text field:</p> <p></p> <p>You should get the following confirmation:</p> <p></p> <p>Click OK.</p> <p>Wait until you see the standard Windows desktop.</p>"},{"location":"cloud/How-to-create-Windows-VM-on-OpenStack-Horizon-and-access-it-via-web-console-on-3Engines-Cloud.html.html#step-9-update-windows","title":"Step 9: Update Windows\ud83d\udd17","text":"<p>Once the Windows virtual machine is up and running, you should update its operating system to have the latest security fixes. Click Start, and then Settings:</p> <p></p> <p>After that, click Update &amp; Security:</p> <p></p> <p>You should now see the Windows Update screen, which can look like this:</p> <p></p> <p>Follow the appropriate prompts to update your operating system.</p>"},{"location":"cloud/How-to-create-Windows-VM-on-OpenStack-Horizon-and-access-it-via-web-console-on-3Engines-Cloud.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>If you want to access your virtual machine remotely using RDP (Remote Desktop Protocol), you should consider increasing its security by using a bastion host. The following article contains more information: Connecting to a Windows VM via RDP through a Linux bastion host port forwarding on 3Engines Cloud</p> <p>To learn more about security groups, you can check this article: How to use Security Groups in Horizon on 3Engines Cloud</p>"},{"location":"cloud/How-to-create-a-Linux-VM-and-access-it-from-Linux-command-line-on-3Engines-Cloud.html.html","title":"How to create a Linux VM and access it from Linux command line on 3Engines Cloud\ud83d\udd17","text":"<p>Creating a virtual machine in the 3Engines Cloud cloud allows you to perform computations without having to engage your own infrastructure. 
In this article you will create a Linux based virtual machine and access it remotely from a Linux command line on a desktop or laptop.</p> <p>If you want to access a Linux VM from a Windows based command line, follow this article instead: How to create a Linux VM and access it from Windows desktop on 3Engines Cloud.</p> <p>Note</p> <p>This article only covers the basics of creating a VM - it does not cover topics such as use of NVIDIA hardware or creating a volume during the creation of a VM.</p>"},{"location":"cloud/How-to-create-a-Linux-VM-and-access-it-from-Linux-command-line-on-3Engines-Cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Creating a Linux virtual machine in the 3Engines Cloud cloud using the Launch Instance command from the Horizon Dashboard</li> </ul> <p>You will enter the following required data into that window:</p> <ul> <li>Instance name</li> <li>Instance source (from an operating system image)</li> <li>Instance flavor (the combination of CPU, memory and storage capacity)</li> <li>Networks that the newly created VM will use</li> </ul> <p>Then create the elements needed later for the SSH connection:</p> <ul> <li>Security groups to control access to the machine and</li> <li>A key pair for SSH access to the Linux based VM in the cloud</li> </ul> <p>For external access</p> <ul> <li>Attach a floating IP to the instance so that it can be found on the Internet and, finally,</li> <li>Use SSH to connect to that virtual machine from another Linux based system</li> </ul>"},{"location":"cloud/How-to-create-a-Linux-VM-and-access-it-from-Linux-command-line-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Hosting</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface https://horizon.3Engines.com.</p> <p>No. 2 Basic knowledge of Linux terminal</p> <p>You should have some experience with the Linux command line interface.</p> <p>No. 
3 Linux installed on your local computer</p> <p>A Linux distribution running on your computer. This article was written for Ubuntu 20.04 LTS, so please adjust the commands to your version of Linux.</p> <p>No. 4 SSH client installed and configured on your local Linux computer</p> <p>The SSH client must be installed and configured on your local Linux computer. Please see Generating an SSH keypair in Linux on 3Engines Cloud.</p> <p>If you already have an SSH key pair and an SSH client configured, you should import your public key to the Horizon dashboard. The following article contains information on how to do it: How to import SSH public key to OpenStack Horizon on 3Engines Cloud.</p> <p>Alternatively, you can also create a key pair directly in Horizon:</p> <p>How to create key pair in OpenStack Dashboard on 3Engines Cloud.</p>"},{"location":"cloud/How-to-create-a-Linux-VM-and-access-it-from-Linux-command-line-on-3Engines-Cloud.html.html#options-for-creation-of-a-virtual-machine-vm","title":"Options for creation of a Virtual Machine (VM)\ud83d\udd17","text":"<p>Creation of a virtual machine is divided into 11 sections, four of which are mandatory (denoted by an asterisk at the end of the option name). In addition to those four (Details, Source, Flavor, and Networks), we shall define Security Groups and Key Pairs. The rest of the options for launching an instance are out of the scope of this article.</p> <p>Note</p> <p>In OpenStack terminology, a virtual machine is also an instance. Instance is a broader term, as not all instances need be virtual machines; it is also possible to use real hardware as an instance.</p> <p>The window to create a virtual machine is called Launch Instance. 
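For reference, the same set of choices (name, source, flavor, networks, security groups, key pair) maps onto a single OpenStack CLI call. This is only a sketch, not the route this article follows, and every name below (image, flavor, network, key, server) is illustrative:

```shell
# A sketch, not part of this article's procedure. Assumes the OpenStack
# CLI is installed and credentials are sourced; all names are illustrative.
openstack server create \
  --image "Ubuntu 20.04 LTS" \
  --flavor eo1.large \
  --network cloud_project_net \
  --security-group default \
  --security-group allow_ping_ssh_icmp_rdp \
  --key-name test-key \
  my-linux-vm
```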
You will enter all the data about an instance into that window and its options.</p>"},{"location":"cloud/How-to-create-a-Linux-VM-and-access-it-from-Linux-command-line-on-3Engines-Cloud.html.html#step-1-start-the-launch-instance-window-and-name-the-virtual-machine","title":"Step 1 Start the Launch Instance window and name the virtual machine\ud83d\udd17","text":"<p>In the Horizon dashboard go to Compute -&gt; Instances and click Launch Instance. You should get the following window:</p> <p></p> <p>Type the name for your virtual machine in the Instance Name text field.</p> <p>Click Next or the Source option on the left side menu.</p>"},{"location":"cloud/How-to-create-a-Linux-VM-and-access-it-from-Linux-command-line-on-3Engines-Cloud.html.html#step-2-define-the-source-of-the-virtual-machine","title":"Step 2 Define the source of the virtual machine\ud83d\udd17","text":"<p>The Source window appears:</p> <p></p> <p>Make sure that the Image option is selected in the Select Boot Source drop-down menu.</p> <p></p> <p>From the Available list, choose the Linux distribution that suits you best and click \u2191 next to it. It should now be visible in the Allocated section:</p> <p></p> <p>This image shows that Ubuntu 20.04 LTS was selected; if you chose CentOS 7 instead, that is what would show here.</p> <p>If you change your mind, click \u2193 to unselect a source and then choose a different one.</p> <p>Images which have NVIDIA in their name are meant for NVIDIA hardware. This article does not cover their use. 
Therefore, make sure that you choose an image without it.</p> <p>Also, make sure that the No option is selected in the Create New Volume section.</p> <p>Click Next or click the Flavor button to define the flavor of the instance.</p>"},{"location":"cloud/How-to-create-a-Linux-VM-and-access-it-from-Linux-command-line-on-3Engines-Cloud.html.html#step-3-define-the-flavor-of-the-instance","title":"Step 3 Define the flavor of the instance\ud83d\udd17","text":"<p>You should now see the following form:</p> <p></p> <p>The standard definition of an OpenStack flavor is the amount of resources available to the instance, such as VCPUs, memory and storage capacity.</p> <p>Choose the one which suits you best and click \u2191 next to it.</p> <p>Make sure that you do not select one of the flavors below - they use NVIDIA hardware and this article does not cover their use.</p> <ul> <li>vm.a6000.1</li> <li>vm.a6000.2</li> <li>vm.a6000.4</li> <li>vm.a6000.8</li> </ul> <p>Sometimes, a flavor might be insufficient for the source you chose in the previous step. If this is the case, you will see a yellow warning sign next to at least one of the values in the row for that flavor:</p> <p></p> <p>To solve this issue, choose a flavor that supports your chosen source instead. In the image above, vm.a6000.4 is not available but, say, hm.large is.</p> <p>Another possible explanation might be that your quota is too low for creating a VM with your chosen flavor. You can see your quota in the Compute -&gt; Overview section of your Horizon dashboard. 
If that is the case, you can either:</p> <ul> <li>choose a different flavor or</li> <li>contact the 3Engines Cloud Support to request a quota increase - Helpdesk and Support.</li> </ul> <p>Click Next or click Networks to define networks.</p>"},{"location":"cloud/How-to-create-a-Linux-VM-and-access-it-from-Linux-command-line-on-3Engines-Cloud.html.html#step-4-define-networks-for-the-virtual-machine","title":"Step 4 Define networks for the virtual machine\ud83d\udd17","text":"<p>You should now see the following window:</p> <p></p> <p>Here you can select networks that will be attached to your virtual machine. They control the way your machine is connected to the Internet, to other machines and to other resources.</p> <p>By default, you should have access to the network whose name starts with cloud_, which allows you to connect your machines together. It also has access to the external network, which gives the instance access to the Internet.</p> <p>Choose that network and also choose any other network that you want to access through the newly created VM.</p> <p>These were the obligatory options. Since you want to access the instance through an SSH connection, you will need to define Security Groups and Key Pair.</p>"},{"location":"cloud/How-to-create-a-Linux-VM-and-access-it-from-Linux-command-line-on-3Engines-Cloud.html.html#step-5-define-security-groups-for-vm","title":"Step 5 Define security groups for VM\ud83d\udd17","text":"<p>Security groups control network traffic to and from your virtual machine.</p> <p>Click Security Groups. You should see the following form:</p> <p></p> <p>By default, you have access to two groups:</p> <ul> <li>default, which blocks all incoming traffic and allows all outgoing traffic</li> <li>allow_ping_ssh_icmp_rdp, which allows incoming Ping, SSH, ICMP and RDP connections</li> </ul> <p>Enable both of these groups. 
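To see exactly which rules each group contains, you can also list them from the command line. A sketch only; it assumes the OpenStack client is installed and your credentials have been sourced:

```shell
# A sketch; assumes the OpenStack CLI is installed and an RC file sourced.
# List the individual rules (protocol, port range, direction) of each group:
openstack security group rule list allow_ping_ssh_icmp_rdp
openstack security group rule list default
```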
One of the open ports in allow_ping_ssh_icmp_rdp is 22, which is a prerequisite for SSH access.</p>"},{"location":"cloud/How-to-create-a-Linux-VM-and-access-it-from-Linux-command-line-on-3Engines-Cloud.html.html#step-6-create-a-key-pair-for-ssh-access","title":"Step 6 Create a key pair for SSH access\ud83d\udd17","text":"<p>To use SSH to connect your local Linux computer to the cloud Linux \u201ccomputer\u201d, you will need to provide one public and one private key. (Keys are random strings, usually hundreds of characters long.)</p> <p>Click Key Pair. You should now see the following window:</p> <p></p> <p>In the image above, the key is called test-key. There are three ways to enter the keys into this window:</p> <ul> <li>using option Create Key Pair \u2013 create it on the spot,</li> <li>using option Import Key Pair \u2013 take the keys you already have and upload them to the cloud,</li> <li>using one of the key pairs that already exist within the OpenStack cloud.</li> </ul> <p>If you haven\u2019t created your key pair yet, please follow Prerequisite No. 4.</p> <p>In any case, make sure that your uploaded key is in the Allocated section.</p>"},{"location":"cloud/How-to-create-a-Linux-VM-and-access-it-from-Linux-command-line-on-3Engines-Cloud.html.html#step-7-create-the-instance","title":"Step 7 Create the instance\ud83d\udd17","text":"<p>Once you have set everything up, click Launch Instance.</p> <p>Your instance should now be in the Instances list. Initially, the instance will be in a state of \u201cSpawning\u201d as in this image:</p> <p></p> <p>Spawning is the process of preparing the instance.</p> <p>Wait up to a few minutes until your instance has finished spawning. The next state is the Running label in the Power State column:</p> <p></p> <p>It means that the instance is ready to use.</p> <p>In Step 4 you have attached a network with the name that starts with cloud_. 
It allows the instance to send and receive data from other instances in the cloud and the Internet but does not automatically provide a static IP address. Such an address is important if you want to host a website or access the instance via the SSH protocol.</p> <p>Just like on the screenshot above, under the IP Address header, you will see network addresses which both start with 10. This means that they are local network addresses. If you want to access your instance remotely, it must have a static IP address. The way to add it is to attach a so-called floating IP address to the instance.</p>"},{"location":"cloud/How-to-create-a-Linux-VM-and-access-it-from-Linux-command-line-on-3Engines-Cloud.html.html#step-8-attach-a-floating-ip-to-the-instance","title":"Step 8 Attach a Floating IP to the instance\ud83d\udd17","text":"<p>Here is how to create and attach a floating IP to your instance: How to Add or Remove Floating IP\u2019s to your VM on 3Engines Cloud.</p> <p>Once you have added the floating IP, you will see it in the Horizon dashboard under the IP Address header - just like in the last image from that article:</p> <p></p> <p>The floating IP address in that article is 64.225.132.0. Your address will vary.</p>"},{"location":"cloud/How-to-create-a-Linux-VM-and-access-it-from-Linux-command-line-on-3Engines-Cloud.html.html#step-9-connecting-to-your-virtual-machine-using-ssh","title":"Step 9 Connecting to your virtual machine using SSH\ud83d\udd17","text":"<p>The following article has information about connecting to a virtual machine using SSH: How to connect to your virtual machine via SSH in Linux on 3Engines Cloud.</p> <p>The last command in that article was:</p> <pre><code>ssh eouser@64.225.132.99\n</code></pre> <p>The IP address in that article is 64.225.132.99 and is different from the address from the previous article. 
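Put together, the connection step amounts to something like the following sketch. Both values are assumptions for illustration only: the IP is the example value from that article and the key path is a typical default; substitute your own floating IP and key.

```shell
# Sketch: compose the SSH command for your VM. Both values below are
# examples - use your own floating IP (from Step 8) and private key path.
FLOATING_IP="64.225.132.99"
KEY_FILE="$HOME/.ssh/id_rsa"
echo "ssh -i $KEY_FILE eouser@$FLOATING_IP"   # run the printed command to connect
```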
Instead of the IP addresses used in these articles (64.225.132.99 and 64.225.132.0), enter the IP address of your instance which you saw after doing Step 8.</p>"},{"location":"cloud/How-to-create-a-Linux-VM-and-access-it-from-Linux-command-line-on-3Engines-Cloud.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>3Engines Cloud cloud can be used for general hosting needs, such as</p> <ul> <li>installing LAMP servers,</li> <li>installing and using WordPress servers,</li> <li>email servers,</li> <li>Kubernetes and SLURM clusters and so on.</li> </ul> <p>To create a cluster of instances, see the series of articles on Kubernetes:</p> <p>How to Create a Kubernetes Cluster Using 3Engines Cloud OpenStack Magnum.</p> <p>If you find yourself unable to connect to your virtual machine using SSH, you can use the web console for troubleshooting and other purposes. Here\u2019s how to do it:</p> <p>How to access the VM from OpenStack console on 3Engines Cloud</p> <p>If you don\u2019t want the storage of your instance to be deleted when the VM is removed, you can choose to use a volume during instance creation. Please see the following articles:</p> <p>VM created with option Create New Volume No on 3Engines Cloud</p> <p>VM created with option Create New Volume Yes on 3Engines Cloud.</p> <p>You can\u2019t apply the SSH keys uploaded to the Horizon dashboard directly to a VM after its creation. The following article presents a workaround for this problem:</p> <p>How to add SSH key from Horizon web console on 3Engines Cloud.</p> <p>If you find that the storage of your VM is insufficient for your needs, you can attach a volume to it after its creation. 
The following articles contain appropriate instructions: How to attach a volume to VM less than 2TB on Linux on 3Engines Cloud and How to attach a volume to VM more than 2TB on Linux on 3Engines Cloud.</p>"},{"location":"cloud/How-to-create-a-Linux-VM-and-access-it-from-Windows-desktop-on-3Engines-Cloud.html.html","title":"How to create a Linux VM and access it from Windows desktop on 3Engines Cloud\ud83d\udd17","text":"<p>Creating a virtual machine in a 3Engines Cloud cloud allows you to perform computations without having to engage your own infrastructure. In this article you will create a Linux-based virtual machine and access it remotely using PuTTY on Windows.</p> <p>If you want to access a Linux VM from the Linux command line, follow this article instead: How to create a Linux VM and access it from Linux command line on 3Engines Cloud.</p> <p>Note</p> <p>This article only covers the basics of creating a VM - it does not cover topics such as use of NVIDIA hardware or creating a volume during the creation of a VM.</p>"},{"location":"cloud/How-to-create-a-Linux-VM-and-access-it-from-Windows-desktop-on-3Engines-Cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Creating a Linux virtual machine in 3Engines Cloud cloud using command Launch Instance from Horizon Dashboard</li> </ul> <p>You will enter the following data into that window:</p> <ul> <li>Instance name</li> <li>Instance source (from an operating system image)</li> <li>Instance flavor (the combination of CPU, memory and storage capacity)</li> <li>Networks that the newly created VM will use</li> </ul> <p>Then you will create the elements needed later for the SSH connection:</p> <ul> <li>Security groups to control access to the machine</li> <li>A chosen key pair for SSH access to the Linux-based VM in the cloud</li> </ul> <p>For external access:</p> <ul> <li>Attach a floating IP to the instance so that it can be found on the Internet.</li> </ul> <p>After that, you will 
connect to a VM using PuTTY:</p> <ul> <li>Convert the public key to the format compatible with PuTTY</li> <li>Configure PuTTY</li> <li>Save PuTTY configuration</li> <li>Connect to a VM</li> </ul>"},{"location":"cloud/How-to-create-a-Linux-VM-and-access-it-from-Windows-desktop-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Hosting</p> <p>You need a 3Engines Cloud hosting account with Horizon interface https://horizon.3Engines.com.</p> <p>No. 2 Basic knowledge of Linux terminal</p> <p>You should have some experience with Linux command line interface.</p> <p>No. 3 Windows</p> <p>You need to have Microsoft Windows 10 or newer installed on your computer.</p> <p>No. 4 PuTTY installed on your local Windows computer</p> <p>You should have PuTTY installed on your computer. You can download it from the following website: https://www.chiark.greenend.org.uk/~sgtatham/putty/.</p> <p>No. 5 SSH key</p> <p>You need to have an SSH key pair. It consists of a public and private key. You can use your existing pair in this workflow or create a new one. If you do not have one, you have several options, such as:</p> <ul> <li> <p>Generate them directly using the Horizon dashboard: How to create key pair in OpenStack Dashboard on 3Engines Cloud.</p> </li> <li> <p>Generate your key pair using the Windows command line. Please check this article: How to Create SSH Key Pair in Windows 10 On 3Engines Cloud. 
If you choose that option, make sure that you upload your public key to the Horizon dashboard: How to import SSH public key to OpenStack Horizon on 3Engines Cloud.</p> </li> </ul> <p>This article contains information about configuring PuTTY using one such key pair.</p>"},{"location":"cloud/How-to-create-a-Linux-VM-and-access-it-from-Windows-desktop-on-3Engines-Cloud.html.html#options-for-creation-of-a-virtual-machine-vm","title":"Options for creation of a Virtual Machine (VM)\ud83d\udd17","text":"<p>Creation of a virtual machine is divided into 11 sections, four of which are mandatory (denoted by an asterisk at the end of the option name). In addition to those four (Details, Source, Flavor, and Networks), we shall define Security Groups and Key Pairs. The rest of the options to launch an instance are out of the scope of this article.</p> <p>Note</p> <p>In OpenStack terminology, a virtual machine is also an instance. Instance is a broader term: not all instances need be virtual machines, as it is also possible to use real hardware as an instance.</p> <p>The window to create a virtual machine is called Launch Instance. You will enter all the data about an instance into that window.</p>"},{"location":"cloud/How-to-create-a-Linux-VM-and-access-it-from-Windows-desktop-on-3Engines-Cloud.html.html#step-1-start-the-launch-instance-window-and-name-the-virtual-machine","title":"Step 1 Start the Launch Instance window and name the virtual machine\ud83d\udd17","text":"<p>In the Horizon dashboard go to Compute -&gt; Instances and click Launch Instance. 
You should get the following window:</p> <p></p> <p>Type the name for your virtual machine in the Instance Name text field.</p> <p>Click Next or the Source option on the left side menu.</p>"},{"location":"cloud/How-to-create-a-Linux-VM-and-access-it-from-Windows-desktop-on-3Engines-Cloud.html.html#step-2-define-the-source-of-the-virtual-machine","title":"Step 2 Define the source of the virtual machine\ud83d\udd17","text":"<p>The Source window appears:</p> <p></p> <p>Make sure that from the drop-down menu Select Boot Source option Image is selected.</p> <p></p> <p>From the Available list choose the Linux distribution that suits you best and click \u2191 next to it. It should now be visible in the Allocated section:</p> <p></p> <p>This image shows that Ubuntu 20.04 LTS was selected; if you chose CentOS 7, however, that is what would show here instead of Ubuntu 20.04 LTS.</p> <p>If you change your mind, click \u2193 to unselect a source and then choose a different one.</p> <p>Images which have NVIDIA in their name are intended for use with NVIDIA hardware. This article does not cover their use. 
Therefore, make sure that you choose the image without it.</p> <p>Also, make sure that in the section Create New Volume option No is selected.</p> <p>Click Next or click the Flavor button to define the flavor of the instance.</p>"},{"location":"cloud/How-to-create-a-Linux-VM-and-access-it-from-Windows-desktop-on-3Engines-Cloud.html.html#step-3-define-the-flavor-of-the-instance","title":"Step 3 Define the flavor of the instance\ud83d\udd17","text":"<p>You should now see the following form:</p> <p></p> <p>The standard definition of OpenStack flavor is the amount of resources available to the instance - like VCPU, memory and storage capacity.</p> <p>Choose the one which suits you best and click \u2191 next to it.</p> <p>Warning</p> <p>Make sure that you do not select one of the below flavors - they contain NVIDIA hardware and this article does not cover their use.</p> <ul> <li>vm.a6000.1</li> <li>vm.a6000.2</li> <li>vm.a6000.4</li> <li>vm.a6000.8</li> </ul> <p>Sometimes, a flavor might be insufficient for the source you chose in the previous step. If this is the case, you will see a yellow warning sign next to at least one of the values in the row for that flavor:</p> <p></p> <p>To solve this issue, choose a flavor that supports your chosen source instead. In the image above, vm.a6000.4 is not available but, say, hm.large is.</p> <p>Another possible cause might be that your quota is too low to create a VM with your chosen flavor. You can see your quota in the Compute -&gt; Overview section of your Horizon dashboard. 
If that is the case, you can either:</p> <ul> <li>choose a different flavor or</li> <li>contact the 3Engines Cloud Support to request a quota increase - Helpdesk and Support.</li> </ul> <p>Click Next or click Networks to define networks.</p>"},{"location":"cloud/How-to-create-a-Linux-VM-and-access-it-from-Windows-desktop-on-3Engines-Cloud.html.html#step-4-define-networks-for-the-virtual-machine","title":"Step 4 Define networks for the virtual machine\ud83d\udd17","text":"<p>You should now see a window to choose one or several networks that you want your VM to work with:</p> <p></p> <p>Here you can select networks that will be attached to your virtual machine. They control the way your machine is connected to the Internet, to other machines and to other resources.</p> <p>By default, you should have access to the network whose name starts with cloud_. It</p> <ul> <li>connects your machines together and</li> <li>has access to the external network, which gives the instance access to the Internet.</li> </ul> <p>Other networks may be present in the system.</p> <p>These were the obligatory options. Since you want to access the instance through an SSH connection, you will also need to define Security Groups and Key Pair.</p>"},{"location":"cloud/How-to-create-a-Linux-VM-and-access-it-from-Windows-desktop-on-3Engines-Cloud.html.html#step-5-define-security-groups-for-vm","title":"Step 5 Define security groups for VM\ud83d\udd17","text":"<p>Security groups control network traffic to and from your virtual machine.</p> <p>Click Security Groups. You should see the following form:</p> <p></p> <p>By default, you have access to two groups:</p> <ul> <li>default which blocks all incoming traffic and allows all outgoing traffic</li> <li>allow_ping_ssh_icmp_rdp which allows incoming Ping, SSH, ICMP and RDP connections</li> </ul> <p>Enable both of these groups. 
One of the open ports in allow_ping_ssh_icmp_rdp is 22, which is a prerequisite for SSH access.</p>"},{"location":"cloud/How-to-create-a-Linux-VM-and-access-it-from-Windows-desktop-on-3Engines-Cloud.html.html#step-6-create-a-key-pair-for-ssh-access","title":"Step 6 Create a key pair for SSH access\ud83d\udd17","text":"<p>To use SSH to connect your local Windows computer to the cloud Linux \u201ccomputer\u201d, you will need to provide one public and one private key. (Keys are random strings, usually hundreds of characters long.)</p> <p>Click Key Pair. You should now see the following window:</p> <p></p> <p>In the image above, the key is called test-key. There are three ways to enter the keys into this window:</p> <ul> <li>using option Create Key Pair \u2013 create it on the spot,</li> <li>using option Import Key Pair \u2013 take the keys you already have and upload them to the cloud,</li> <li>using one of the key pairs that already exist within the OpenStack cloud.</li> </ul> <p>If you haven\u2019t created your key pair yet, please follow Prerequisite No. 5.</p> <p>In any case, make sure that your uploaded key is in the Allocated section.</p>"},{"location":"cloud/How-to-create-a-Linux-VM-and-access-it-from-Windows-desktop-on-3Engines-Cloud.html.html#step-7-create-the-instance","title":"Step 7 Create the instance\ud83d\udd17","text":"<p>Once you have set everything up, click Launch Instance.</p> <p>Your instance should now be in the Instances list. Initially, the instance will be in a state of \u201cSpawning\u201d as in this image:</p> <p></p> <p>Spawning is the process of preparing the instance.</p> <p>Wait up to a few minutes until your instance has finished spawning. The next state is the Running label in the Power State column:</p> <p></p> <p>It means that the instance is ready to use.</p> <p>In Step 4 you have attached a network with the name that starts with cloud_. 
It allows the instance to send and receive data from other instances in the cloud and the Internet but does not automatically provide a static IP address. Such an address is important if you want to host a website or access the instance via the SSH protocol.</p> <p>Just like on the screenshot above, under the IP Address header, you will see network addresses which both start with 10. This means that they are local network addresses. If you want to access your instance remotely, it must have a static IP address. The way to add it is to attach a so-called floating IP address to the instance.</p>"},{"location":"cloud/How-to-create-a-Linux-VM-and-access-it-from-Windows-desktop-on-3Engines-Cloud.html.html#step-8-attach-a-floating-ip-to-the-instance","title":"Step 8 Attach a Floating IP to the instance\ud83d\udd17","text":"<p>Here is how to create and attach a floating IP to your instance: How to Add or Remove Floating IP\u2019s to your VM on 3Engines Cloud.</p> <p>Once you have added the floating IP, you will see it in the Horizon dashboard under the IP Address header - just like in the last image from that article:</p> <p></p> <p>The floating IP address in that article is 64.225.132.0. Your address will vary.</p>"},{"location":"cloud/How-to-create-a-Linux-VM-and-access-it-from-Windows-desktop-on-3Engines-Cloud.html.html#step-9-convert-your-ssh-key","title":"Step 9 Convert your SSH key\ud83d\udd17","text":"<p>If you followed Prerequisite No. 5, you should have an SSH key pair on your local computer - public and private.</p> <p>In order to connect to a virtual machine using PuTTY, you must first convert your private key to the PuTTY format.</p> <p>If you haven\u2019t installed PuTTY yet, please follow Prerequisite No. 4. You should have the following section in your Start menu:</p> <p></p> <p>Choose PuTTYgen. You should get the following window:</p> <p></p> <p>Click Load. 
A file selector should appear:</p> <p></p> <p>In the lower section of the window there should be a drop-down menu with the option PuTTY Private Key Files already selected. Click it and choose All Files (*.*) instead:</p> <p></p> <p>Find your downloaded private key and click Open.</p> <p>The following message should appear:</p> <p></p> <p>Click OK. You should return to the previous window (PuTTY Key Generator). Click Save private key. You should get the following question:</p> <p></p> <p>Click Yes.</p> <p>In the next window Save private key as: choose the location in which you wish to place your private key. Choose a name for your file and press Save.</p> <p>Close the PuTTY Key Generator window. Your saved file should look like this:</p> <p></p> <p>Of course, your file will probably have a different name than the one on the screenshot above.</p>"},{"location":"cloud/How-to-create-a-Linux-VM-and-access-it-from-Windows-desktop-on-3Engines-Cloud.html.html#step-10-configure-putty","title":"Step 10 Configure PuTTY\ud83d\udd17","text":"<p>Run PuTTY from your Start menu. The following window should appear:</p> <p></p> <p>In the text field Host Name (or IP address) type the floating IP of your virtual machine - in this example it is 64.225.133.247:</p> <p></p> <p>In the Category section (in the left part of the window) go to Connection -&gt; SSH -&gt; Auth -&gt; Credentials to authenticate with. 
You should get the following form:</p> <p></p> <p>Click Browse\u2026 next to the text field Private key file for authentication.</p> <p>Choose your converted private key.</p> <p>The location of your key should appear in the Private key file for authentication text box:</p> <p></p>"},{"location":"cloud/How-to-create-a-Linux-VM-and-access-it-from-Windows-desktop-on-3Engines-Cloud.html.html#step-11-save-the-session-settings","title":"Step 11 Save the session settings\ud83d\udd17","text":"<p>To save these settings for future use, return to the Session category in which you typed the floating IP of your virtual machine. Choose the name of your session and type it in the text field in the Load, save or delete a stored session section:</p> <p></p> <p>Click Save. Your saved session should appear on the list:</p> <p></p>"},{"location":"cloud/How-to-create-a-Linux-VM-and-access-it-from-Windows-desktop-on-3Engines-Cloud.html.html#step-12-connect-to-your-virtual-machine","title":"Step 12 Connect to your virtual machine\ud83d\udd17","text":"<p>To connect to your virtual machine, click Open. If you are connecting to that machine for the first time, you should receive the following alert:</p> <p></p> <p>Click Accept.</p> <p>You will be asked for your username:</p> <p></p> <p>Type eouser and press Enter.</p> <p>Note</p> <p>User eouser is the predefined Linux user name on default images on 3Engines Cloud hosting.</p> <p>You should now be connected to your virtual machine and be able to execute commands:</p> <p></p>"},{"location":"cloud/How-to-create-a-Linux-VM-and-access-it-from-Windows-desktop-on-3Engines-Cloud.html.html#using-your-saved-putty-session-to-simplify-login","title":"Using your saved PuTTY session to simplify login\ud83d\udd17","text":"<p>In order to use your saved session, open PuTTY. 
In the Load, save or delete a stored session section, click the name of your saved session on the list and click Load.</p> <p>All your settings, including the floating IP of your VM, should now be filled in:</p> <p></p> <p>You can now start your session as explained in Step 12 above.</p>"},{"location":"cloud/How-to-create-a-Linux-VM-and-access-it-from-Windows-desktop-on-3Engines-Cloud.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>3Engines Cloud cloud can be used for general hosting needs, such as</p> <ul> <li>installing LAMP servers,</li> <li>installing and using WordPress servers,</li> <li>email servers,</li> <li>Kubernetes and SLURM clusters and so on.</li> </ul> <p>To create a cluster of instances, see the series of articles on Kubernetes:</p> <p>How to Create a Kubernetes Cluster Using 3Engines Cloud OpenStack Magnum.</p> <p>If you find yourself unable to connect to your virtual machine using SSH, you can use the web console for troubleshooting and other purposes. Here\u2019s how to do it:</p> <p>How to access the VM from OpenStack console on 3Engines Cloud</p> <p>If you don\u2019t want the storage of your instance to be deleted when the VM is removed, you can choose to use a volume during instance creation. Please see the following articles:</p> <p>VM created with option Create New Volume No on 3Engines Cloud</p> <p>VM created with option Create New Volume Yes on 3Engines Cloud.</p> <p>You can\u2019t apply the SSH keys uploaded to the Horizon dashboard directly to a VM after its creation. The following article presents a workaround for this problem:</p> <p>How to add SSH key from Horizon web console on 3Engines Cloud.</p> <p>If you find that the storage of your VM is insufficient for your needs, you can attach a volume to it after its creation. 
The following articles contain appropriate instructions: How to attach a volume to VM less than 2TB on Linux on 3Engines Cloud and How to attach a volume to VM more than 2TB on Linux on 3Engines Cloud.</p>"},{"location":"cloud/How-to-create-a-VM-using-the-OpenStack-CLI-client-on-3Engines-Cloud-cloud.html.html","title":"How to create a VM using the OpenStack CLI client on 3Engines Cloud cloud\ud83d\udd17","text":"<p>This article will cover creating a virtual machine on 3Engines Cloud cloud using the OpenStack CLI client exclusively. It contains basic information to get you started.</p>"},{"location":"cloud/How-to-create-a-VM-using-the-OpenStack-CLI-client-on-3Engines-Cloud-cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>The openstack command to create a VM</li> <li>Selecting parameters of the new virtual machine</li> </ul> <ul> <li>Image</li> <li>Flavor</li> <li>Key pair</li> <li>Network(s)</li> <li>Security group(s)</li> </ul> <ul> <li>Creating a virtual machine with CLI only</li> <li>Adding a floating IP to the existing VM</li> <li>Using SSH to access the VM</li> </ul>"},{"location":"cloud/How-to-create-a-VM-using-the-OpenStack-CLI-client-on-3Engines-Cloud-cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p> <p>No. 2 OpenStack CLI client configured</p> <p>To have the OpenStack CLI client configured and operational, see article: How to install OpenStackClient for Linux on 3Engines Cloud.</p> <p>If the command</p> <pre><code>openstack flavor list\n</code></pre> <p>shows a list of flavors, the openstack command is operational.</p> <p>No. 
3 Available image to create a new VM from</p> <p>In general, you can create a new virtual machine from these four sources:</p> <ul> <li>operating system image</li> <li>instance snapshot</li> <li>volume</li> <li>volume snapshot</li> </ul> <p>In this article, we will use the first option, an operating system image, as a source of a new virtual machine. There are three ways you can obtain an image:</p> Images that are automatically included on 3Engines Cloud cloud There is a set of images that come predefined with the cloud. Typically, that default list of images will contain Ubuntu, CentOS, and Windows 2019/22 images, with various flavors. Other default images could be available as well, say, for AlmaLinux, OPNSense, OSGeolive, Rocky Linux and so on. Images shared from other projects Under OpenStack, images can be shared between the projects. To have an alien image available in your project, you have to accept it first. Images uploaded within your account <p>Finally, you can upload an image by yourself. Once uploaded, the image will be a first class citizen but it may not be automatically available on other accounts you might have.</p> <p>See this article</p> <p>How to upload your custom image using OpenStack CLI on 3Engines Cloud</p> <p>for an example of uploading a new Debian image to the cloud.</p> <p>No. 4 Available SSH key pair</p> <p>These two articles should help generate and import the SSH key into the cloud:</p> <ul> <li>/networking/Generating-a-sshkeypair-in-Linux-on-3Engines-Cloud and</li> </ul>"},{"location":"cloud/How-to-create-instance-snapshot-using-Horizon-on-3Engines-Cloud.html.html","title":"How to create instance snapshot using Horizon on 3Engines Cloud\ud83d\udd17","text":"<p>In this article, you will learn how to create instance snapshot on 3Engines Cloud cloud, using Horizon dashboard.</p> <p>Instance snapshots allow you to archive the state of the virtual machine. 
You can then use them for</p> <ul> <li>backup,</li> <li>migration between clouds,</li> <li>disaster recovery and/or</li> <li>cloning environments for testing or development.</li> </ul> <p>We cover both types of storage for instances, ephemeral and persistent.</p>"},{"location":"cloud/How-to-create-instance-snapshot-using-Horizon-on-3Engines-Cloud.html.html#the-plan","title":"The plan\ud83d\udd17","text":"<p>In reality, you will be using the procedures described in this article with already existing instances.</p> <p>However, to get a clear grasp of the process, while following this article you are going to create two new instances, one with the ephemeral and the other with the persistent type of storage. Let their names be instance-which-uses-ephemeral and instance-which-uses-volume. You will create an instance snapshot for each of them.</p> <p>If you are only interested in one of these types of instances, you can follow its respective section of this text.</p> <p>It goes without saying that after following a section about one type of virtual machine you can clean up the resources you created to, say, save costs.</p> <p>Or you can keep them and use them to create an instance using one of the articles mentioned in What To Do Next.</p>"},{"location":"cloud/How-to-create-instance-snapshot-using-Horizon-on-3Engines-Cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li> <p>Create snapshot of instance which uses ephemeral storage</p> </li> <li> <p>Navigate to the list of instances in Horizon</p> </li> <li>Shut down the VM</li> <li>Create a snapshot</li> <li>Show what the snapshot will contain for ephemeral storage</li> <li> <p>Create snapshot of instance which uses persistent storage</p> </li> <li> <p>Navigate to the list of instances in Horizon</p> </li> <li>Shut down the VM</li> <li>Create a snapshot</li> <li>What does creating a snapshot of instance with persistent storage do?</li> <li>Exploring instance 
snapshot and volume snapshots which were created alongside it</li> <li>What happens if there are multiple volumes?</li> <li>Downloading an instance snapshot</li> </ul>"},{"location":"cloud/How-to-create-instance-snapshot-using-Horizon-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p> <p>No. 2 Ephemeral storage vs. persistent storage</p> <p>Please see article Ephemeral vs Persistent storage option Create New Volume on 3Engines Cloud to understand the basic difference between ephemeral and persistent types of storage in OpenStack.</p> <p>No. 3 Instance with ephemeral storage</p> <p>You need a virtual machine hosted on 3Engines Cloud cloud.</p> <p>Using any of the following articles will produce an instance with ephemeral storage:</p>"},{"location":"cloud/How-to-create-key-pair-in-OpenStack-Dashboard-on-3Engines-Cloud.html.html","title":"How to create key pair in OpenStack Dashboard on 3Engines Cloud\ud83d\udd17","text":"<p>Open Compute -&gt; Key Pairs</p> <p></p> <p>Click Create Key Pair, insert the name of the key (eg. \u201cssh-key\u201d) and Key Type.</p> <p></p> <p>After generating the key pair, a download window for your new private key will appear. Click Open with Text Editor.</p> <p></p> <p></p> <p>Save the key as \u201cid_rsa\u201d in the folder of your choice (on Linux, keys are usually kept in the ~/.ssh folder).</p> <p>On Linux, you should change the permissions on the private key:</p> <pre><code>$ chmod 600 id_rsa\n</code></pre> <p></p> <p>Click the key name in the Key Pairs menu and read your public key. You can also save the key to a file, like the private key. 
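If you generate the pair locally instead of in Horizon, the save-and-protect steps look roughly like this. It is a sketch under assumptions: OpenSSH is installed, and the file name id_rsa_demo is an example chosen to avoid overwriting an existing key.

```shell
# Sketch: generate a key pair locally and restrict the private key's
# permissions, as described above. Path and name are examples only.
mkdir -p "$HOME/.ssh"
rm -f "$HOME/.ssh/id_rsa_demo" "$HOME/.ssh/id_rsa_demo.pub"
ssh-keygen -t rsa -b 4096 -f "$HOME/.ssh/id_rsa_demo" -N "" -q
chmod 600 "$HOME/.ssh/id_rsa_demo"     # private key readable by you only
ls -l "$HOME/.ssh/id_rsa_demo" "$HOME/.ssh/id_rsa_demo.pub"
```

The empty `-N ""` (no passphrase) is used here only to keep the illustration non-interactive; in practice a passphrase is recommended.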
You can name it, for example, \u201cid_rsa.pub\u201d.</p> <ul> <li>To connect via SSH to your Virtual Machine using Linux, follow the steps in this FAQ:</li> </ul> <p>How to connect to your virtual machine via SSH in Linux on 3Engines Cloud</p> <ul> <li>To connect via SSH to your Virtual Machine using Windows (Command Prompt), follow the steps in this FAQ:</li> </ul> <p>How to connect to a virtual machine via SSH from Windows 10 Command Prompt on 3Engines Cloud</p>"},{"location":"cloud/How-to-create-new-Linux-VM-in-OpenStack-Dashboard-Horizon-on-3Engines-Cloud.html.html","title":"How to create new Linux VM in OpenStack Dashboard Horizon on 3Engines Cloud\ud83d\udd17","text":"<p>Go to Project \u2192 Compute \u2192 Instances.</p> <p></p> <p>Click \u201cLaunch Instance\u201d.</p> <p>Insert the name of the Instance (eg. \u201cvm01\u201d) and click the Next button.</p> <p></p> <p>Select Instance Boot Source (eg. \u201cImage\u201d), and choose the desired image (eg. \u201cUbuntu 20.04 LTS\u201d) by clicking the arrow.</p> <p>Note</p> <p>If you do not need the system disk to be bigger than the size defined in the chosen flavor, we recommend setting the \u201cCreate New Volume\u201d feature to the \u201cNo\u201d state.</p> <p></p> <p>Choose Flavor (eg. eo1.xsmall).</p> <p></p> <p>Click \u201cNetworks\u201d and then choose the desired networks.</p> <p></p> <p>Open \u201cSecurity Groups\u201d. After that, choose the \u201cdefault\u201d and \u201callow_ping_ssh_icmp_rdp\u201d groups.</p> <p></p> <p>Choose or generate an SSH keypair for your VM: How to create key pair in OpenStack Dashboard on 3Engines Cloud. Next, launch your instance by clicking the blue button.</p> <p></p> <p>You will see the \u201cInstances\u201d menu with your newly created VM.</p> <p></p> <p>Open the drop-down menu and choose \u201cConsole\u201d.</p> <p></p> <p>Fig. 2 Click on the black terminal area (to activate access to the console). 
Type: eoconsole and hit Enter.\ud83d\udd17</p> <p></p> <p>Insert and retype the new password.</p> <p></p> <p>Now you can type commands.</p> <p></p> <p>After you finish, type \u201cexit\u201d.</p> <p></p> <p>This will close the session.</p> <p>If you want to make your VM accessible from the Internet, check How to Add or Remove Floating IP\u2019s to your VM on 3Engines Cloud.</p>"},{"location":"cloud/How-to-fix-unresponsive-console-issue-on-3Engines-Cloud.html.html","title":"How to fix unresponsive console issue on 3Engines Cloud\ud83d\udd17","text":"<p>When you create a new virtual machine, the first thing you might want to do is to have a look at the console panel and check whether the instance has booted correctly.</p> <p>After opening up the console in OpenStack you might encounter these issues:</p> <ul> <li>an unresponsive grey screen</li> <li>a document icon in the bottom-right corner, which indicates an issue on the client side</li> </ul> <p></p> <p>In this case:</p> <p>Check your firewall rules for port 6082 and allow \u201cincoming\u201d traffic on that port.</p> <p>Connecting through RDP:</p> <p>Make sure that you have included your floating IP in the RDP rules on your computer. If you want to make these changes for more than one machine, then include our external network address: 185.178.84.0/22.</p>"},{"location":"cloud/How-to-generate-ec2-credentials-on-3Engines-Cloud.html.html","title":"How to generate and manage EC2 credentials on 3Engines Cloud\ud83d\udd17","text":"<p>EC2 credentials are used for accessing private S3 buckets on 3Engines Cloud. 
This article covers how to generate and manage a pair of EC2 credentials so that you will be able to mount those buckets both</p> <ul> <li>on your virtual machines and</li> <li>on your local computers.</li> </ul> <p>Warning</p> <p>A pair of EC2 credentials usually provides access to secret data, so share it only with trusted individuals.</p>"},{"location":"cloud/How-to-generate-ec2-credentials-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Hosting</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com</p> <p>No. 2 OpenStack CLI client installed and configured</p> <p>You need to have the OpenStack CLI operational.</p> <p>First, it must be installed. You have several options, such as:</p>"},{"location":"cloud/How-to-generate-or-use-Application-Credentials-via-CLI-on-3Engines-Cloud.html.html","title":"How to generate or use Application Credentials via CLI on 3Engines Cloud\ud83d\udd17","text":"<p>You can authenticate your applications to keystone by creating application credentials for them. It is also possible to delegate a subset of role assignments on a project to an application credential, granting the same or restricted authorization to a project for the app.</p> <p>With application credentials, apps authenticate with the \u201capplication credential ID\u201d and a \u201csecret\u201d string which is not the user\u2019s password. Thanks to this, the user\u2019s password is not embedded in the application\u2019s configuration, which is especially important for users whose identities are managed by an external system such as LDAP or a single sign-on system.</p>"},{"location":"cloud/How-to-generate-or-use-Application-Credentials-via-CLI-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Hosting</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p> <p>No. 
2 Authenticate</p> <p>Once you have installed the OpenStackClient (see the next prerequisite), you need to authenticate to start using it: How to activate OpenStack CLI access to 3Engines Cloud cloud using one- or two-factor authentication</p> <p>No. 3 OpenStackClient installed and available</p> <p>OpenStackClient is written in Python, so it is recommended to use a dedicated virtual environment for the rest of this article.</p> Install GitBash on Windows How to install OpenStackClient GitBash for Windows on 3Engines Cloud. Install and run WSL (Linux under Windows) How to install OpenStackClient on Windows using Windows Subsystem for Linux on 3Engines Cloud OpenStack Hosting. Install OpenStackClient on Linux How to install OpenStackClient for Linux on 3Engines Cloud. <p>No. 4 jq installed and running</p> <p>You will need to have jq up and running. On Ubuntu, for example, the commands would be:</p> <pre><code>apt update &amp;&amp; apt upgrade -y # Get the latest packages list and upgrade installed packages\napt install jq -y # Install jq from the default Ubuntu repository\njq --version # Check the installed jq version\n</code></pre>"},{"location":"cloud/How-to-generate-or-use-Application-Credentials-via-CLI-on-3Engines-Cloud.html.html#step-1-cli-commands-for-application-credentials","title":"Step 1 CLI Commands for Application Credentials\ud83d\udd17","text":"<p>The command</p> <pre><code>openstack application credential\n</code></pre> <p>will list the four available commands:</p> <pre><code>application credential create\napplication credential delete\napplication credential list\napplication credential show\n</code></pre> <p>To see the parameters for these commands, end them with --help, like this:</p> <pre><code>openstack application credential create --help\n</code></pre> <p>Among dozens of lines describing all the possible parameters, of particular interest are the commands to create a new credential:</p> <p></p> <p>Note</p> <p>The --help option will produce a vim-like output, so type q on the keyboard 
to get back to the usual terminal line.</p>"},{"location":"cloud/How-to-generate-or-use-Application-Credentials-via-CLI-on-3Engines-Cloud.html.html#step-2-the-simplest-way-to-create-a-new-application-credential","title":"Step 2 The Simplest Way to Create a New Application Credential\ud83d\udd17","text":"<p>The simplest way to generate a new application credential is just to define the name \u2013 the rest of the parameters will be defined automatically for you. The following command uses the name cred2:</p> <pre><code>openstack application credential create cred2\n</code></pre> <p>The new application credential will be created and shown on the screen:</p> <p></p>"},{"location":"cloud/How-to-generate-or-use-Application-Credentials-via-CLI-on-3Engines-Cloud.html.html#step-3-using-all-parameters-to-create-a-new-application-credential","title":"Step 3 Using All Parameters to Create a New Application Credential\ud83d\udd17","text":"<p>Here is the meaning of the related parameters:</p> <p>--secret</p> <p>Secret value to use for authentication. If omitted, one will be generated automatically.</p> <p>--role</p> <p>Roles to authorize. If not specified, all roles of the current user are copied. Repeat this parameter to specify another role to become part of the credential. Example roles are:</p> <pre><code>_member_ magnum_user load-balancer_member heat_stack_owner creator k8s_admin\n</code></pre> <p>Note</p> <p>Role _member_ is the most basic role and should always be present. Beware, however, that in some variations of OpenStack it can be called member instead of _member_.</p> <p>--expiration</p> <p>Sets an expiration date. If not present, the application credential will not expire. 
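A hedged Python sketch for producing an --expiration timestamp in the YYYY-mm-ddTHH:MM:SS shape used by the CLI examples in this article (the helper name expiration_in is hypothetical):

```python
from datetime import datetime, timedelta

def expiration_in(days: int) -> str:
    """Build an --expiration value such as 2022-11-09T13:27:01."""
    return (datetime.now() + timedelta(days=days)).strftime("%Y-%m-%dT%H:%M:%S")

print(expiration_in(30))
```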
The format is YYYY-mm-ddTHH:MM:SS, for instance:</p> <pre><code>--expiration $(date +\"%Y-11-%dT%H:%M:%S\")\n</code></pre> <p>That will yield the following date:</p> <pre><code>2022-11-09T13:27:01.000000\n</code></pre> <p>Parameters --unrestricted and --restricted</p> <p>By default, for security reasons, application credentials are forbidden from being used for creating additional application credentials or keystone trusts. If your application needs to be able to perform these actions, use the parameter --unrestricted.</p> <p>Here is a complete example, using all of the available parameters to create a new application credential:</p> <pre><code>openstack application credential create foo-dev-member4 --role _member_ --expiration $(date +\"%Y-11-%dT%H:%M:%S\") --description \"Test application credentials\" --unrestricted -c id -c secret -f json | jq -r '\"application_credential_id: \\\"\" + .id + \"\\\"\", \"application_credential_secret: \\\"\" + .secret + \"\\\"\"'\n</code></pre> <p>The result is:</p> <p></p> <p>The new application credential will be named foo-dev-member4, will use the role _member_, and so on. The part of the command starting with | jq -r prints only the values of the credential id and secret, since you have to enter those values into the clouds.yml file to enable password-less authentication.</p>"},{"location":"cloud/How-to-generate-or-use-Application-Credentials-via-CLI-on-3Engines-Cloud.html.html#step-4-enter-id-and-secret-into-cloudsyml","title":"Step 4 Enter id and secret into clouds.yml\ud83d\udd17","text":"<p>You are now going to store the values of id and secret that the cloud has sent to you. Once stored, future openstack commands will use these values to authenticate to the cloud without using any kind of password.</p> <p>The place to store id and secret is a file called clouds.yml. 
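The | jq -r filter in the example above only reformats the JSON returned by the CLI into two clouds.yml-ready lines; a hedged Python equivalent (the function name is hypothetical and the sample values are placeholders):

```python
import json

def clouds_yml_lines(raw_json: str) -> str:
    """Mimic the jq filter: emit id and secret as clouds.yml-ready lines."""
    cred = json.loads(raw_json)
    return (f'application_credential_id: "{cred["id"]}"\n'
            f'application_credential_secret: "{cred["secret"]}"')

sample = '{"id": "abc123", "secret": "s3cret"}'  # placeholder credential
print(clouds_yml_lines(sample))
```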
It may reside on your local computer in one of these three locations:</p> Current directory <p>./clouds.yml</p> <p>You may want to create a special folder with the mkdir command and paste clouds.yml into it.</p> <p>The current directory is searched first.</p> User configuration directory <p>$HOME/.config/openstack/clouds.yml</p> <p>The most common default location for individual users.</p> <p>Searched after the current directory.</p> System-wide configuration directory <p>/etc/openstack/clouds.yml</p> <p>This location is searched last.</p> <p>Usually you must be root to modify that file.</p> <p>The first clouds.yml file that is found will be used.</p> <p>Note</p> <p>The contents of the clouds.yml file will be in YAML format. YAML files customarily use the extension yaml, but here the extension is yml instead.</p> <p>Let us create a new application credential called trial-member_creatornew.</p> <pre><code>openstack application credential create trial-member_creatornew --unrestricted -c id -c secret -f json | jq -r '\"application_credential_id: \\\"\" + .id + \"\\\"\", \"application_credential_secret: \\\"\" + .secret + \"\\\"\"'\n</code></pre> <p>This is the result:</p> <p></p> <p>Now create the clouds.yml file using your preferred editor. Here it is nano:</p> <pre><code>nano $HOME/.config/openstack/clouds.yml\n</code></pre> <p>If the file does not already exist, nano will create it. 
Here are its contents:</p> <p>clouds.yml</p> <pre><code>clouds:\n trial-member_creatornew:\n auth_type: \"v3applicationcredential\"\n auth:\n auth_url: https://keystone.3Engines.com:5000/v3\n application_credential_id: \"a582edb593644106baeaa75fd706feb2\"\n application_credential_secret: \"mPKQort71xi7Ros7BHb1sG4753wvN_tmJMBd1aRBBGzgFZM7AoUkLWzCutQuh-dAyac86-rkikYqqYaT1_f0hA\"\n</code></pre> <p>Let us dissect that file line by line:</p> <ul> <li>clouds: is plural, as it is possible to define parameters of two or more clouds in the same file.</li> <li>trial-member_creatornew is the name of the application credential used in the previous credential create command.</li> <li>v3applicationcredential is the type of auth connection (it is always the same)</li> <li>auth starts the block of authentication parameters</li> <li>auth_url the address to call on the 3Engines Cloud OpenStack server (it is always the same)</li> <li>application_credential_id the value from the previous call of the credential create command</li> <li>application_credential_secret the value from the previous call of the credential create command</li> </ul> <p>This is how it should look in the editor:</p> <p></p> <p>Save it with Ctrl-X, then press Y and Enter.</p>"},{"location":"cloud/How-to-generate-or-use-Application-Credentials-via-CLI-on-3Engines-Cloud.html.html#step-5-gain-access-to-the-cloud-by-specifying-os_cloud-or-os-cloud","title":"Step 5 Gain access to the cloud by specifying OS_CLOUD or --os-cloud\ud83d\udd17","text":"<p>Application credentials give access to all of the activated regions and you have to specify which one to use. Specify it as the value of the parameter --os-region, for instance WAW3-2 or WAW4-1.</p> <p>In the previous step you defined a clouds.yml file which starts with clouds:. The next line defines which cloud the parameters refer to; here it was trial-member_creatornew. 
By design, the clouds.yml file can contain information on several clouds \u2013 not only one \u2013 so it is necessary to specify which cloud you are referring to. There is a special parameter for that, called</p> <ul> <li>OS_CLOUD if used as an environment variable, or</li> <li>--os-cloud if used from the command line.</li> </ul> <p>You define OS_CLOUD by directly assigning its value from the command line:</p> <pre><code>export OS_CLOUD=trial-member_creatornew\necho $OS_CLOUD\n</code></pre> <p>Open a new terminal window, execute the command above and then try to access the server:</p> <p></p> <p>It works.</p> <p>You can also use that parameter in the command line, like this:</p> <pre><code>openstack --os-cloud=trial-member_creatornew flavor list\n</code></pre> <p>It works as well:</p> <p></p> <p>You have to set OS_CLOUD once per new terminal window; after that, you can use the openstack command without adding the --os-cloud parameter every time.</p> <p>If you had two or more clouds defined in the clouds.yml file, then using --os-cloud in the command line would be more flexible.</p> <p>In both cases, you can access the cloud without specifying the password, which was the goal in the first place.</p>"},{"location":"cloud/How-to-generate-or-use-Application-Credentials-via-CLI-on-3Engines-Cloud.html.html#environment-variable-based-storage","title":"Environment variable-based storage\ud83d\udd17","text":"<p>You can also export application credentials as environment variables. This increases security, especially in virtual machines. 
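A script can resolve the active cloud the same way the openstack client does, by reading OS_CLOUD from the environment; a hedged sketch (the helper name selected_cloud is hypothetical):

```python
import os

def selected_cloud() -> str:
    """Return the cloud name from OS_CLOUD, or fail with a hint."""
    cloud = os.environ.get("OS_CLOUD")
    if not cloud:
        raise RuntimeError("OS_CLOUD is not set; export it or pass --os-cloud")
    return cloud

os.environ["OS_CLOUD"] = "trial-member_creatornew"  # the name used in this guide
print(selected_cloud())  # prints trial-member_creatornew
```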
Also, automation tools can use them dynamically.</p> <p>To set them for the current session:</p> <pre><code>export OS_CLOUD=mycloud\nexport OS_APPLICATION_CREDENTIAL_ID=&lt;your-id&gt;\nexport OS_APPLICATION_CREDENTIAL_SECRET=&lt;your-secret&gt;\n</code></pre> <p>To make them persistent, add these lines to your ~/.bashrc or ~/.zshrc file:</p> <pre><code>echo 'export OS_CLOUD=mycloud' &gt;&gt; ~/.bashrc\necho 'export OS_APPLICATION_CREDENTIAL_ID=&lt;your-id&gt;' &gt;&gt; ~/.bashrc\necho 'export OS_APPLICATION_CREDENTIAL_SECRET=&lt;your-secret&gt;' &gt;&gt; ~/.bashrc\nsource ~/.bashrc\n</code></pre> <p>This method is useful for scripted deployments, temporary sessions, and when you don\u2019t want credentials stored in files.</p>"},{"location":"cloud/How-to-generate-or-use-Application-Credentials-via-CLI-on-3Engines-Cloud.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>Here are some articles that use application credentials:</p> <p>How to install Rancher RKE2 Kubernetes on 3Engines Cloud</p> <p>Configuring IP Whitelisting for OpenStack Load Balancer using Terraform on 3Engines Cloud</p> <p>OpenStack User Roles on 3Engines Cloud</p>"},{"location":"cloud/How-to-install-Python-virtualenv-or-virtualenvwrapper-on-3Engines-Cloud.html.html","title":"How to install Python virtualenv or virtualenvwrapper on 3Engines Cloud\ud83d\udd17","text":"<p>Virtualenv is a tool for creating isolated Python environments. It is mainly used to avoid problems with dependencies and versions. virtualenvwrapper is an extension for virtualenv: it provides wrapper commands for creating and deleting environments, which makes it a useful supplement for our current subject.</p> <p>For the purposes of this guide we will use a virtual machine vm01 running Ubuntu 22.04 LTS.</p> <p>Log in to your virtual machine and check the Python version (it should be preinstalled). 
The next step is the installation of pip:</p> <pre><code>eouser@vm01:~$ python3 --version\n\nPython 3.10.12\n\neouser@vm01:~$ sudo apt install python3-pip\n</code></pre> <p>Confirm the pip3 installation by invoking:</p> <pre><code>eouser@vm01:~$ pip3 -V\n</code></pre> <p>If pip3 is installed, the pip3 -V command should give output similar to this:</p> <pre><code>pip 22.0.2 from /usr/lib/python3/dist-packages/pip (python 3.10)\n</code></pre> <p>Install virtualenvwrapper via pip:</p> <pre><code>eouser@vm01:~$ pip3 install virtualenvwrapper\n</code></pre> <p>Create a new directory to store your virtual environments, for example:</p> <pre><code>mkdir .virtualenvs\n</code></pre> <p>Now we are going to modify the .bashrc file by adding lines that will adjust every new virtual environment to use Python 3. We will point virtual environments to the directory we created above (.virtualenvs) and we will also point to the locations of virtualenv and virtualenvwrapper. Open the .bashrc file using an editor, for example:</p> <pre><code>vim ~/.bashrc\n</code></pre> <p>Navigate to the bottom of the .bashrc file and add the following lines:</p> <pre><code>#virtualenvwrapper settings:\nexport WORKON_HOME=$HOME/.virtualenvs\nexport VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3\nexport VIRTUALENVWRAPPER_VIRTUALENV=/home/eouser/.local/bin/virtualenv\nsource $HOME/.local/bin/virtualenvwrapper.sh\n</code></pre> <p>After that, save the .bashrc file.</p> <p>Now we have to reload the .bashrc script; to do so, execute the command:</p> <pre><code>source ~/.bashrc\n</code></pre> <p>If everything is set up properly, you should see the following lines:</p> <pre><code>virtualenvwrapper.user_scripts creating /home/eouser/.virtualenvs/premkproject\nvirtualenvwrapper.user_scripts creating /home/eouser/.virtualenvs/postmkproject\nvirtualenvwrapper.user_scripts creating /home/eouser/.virtualenvs/initialize\nvirtualenvwrapper.user_scripts creating /home/eouser/.virtualenvs/premkvirtualenv\nvirtualenvwrapper.user_scripts 
creating /home/eouser/.virtualenvs/postmkvirtualenv\nvirtualenvwrapper.user_scripts creating /home/eouser/.virtualenvs/prermvirtualenv\nvirtualenvwrapper.user_scripts creating /home/eouser/.virtualenvs/postrmvirtualenv\nvirtualenvwrapper.user_scripts creating /home/eouser/.virtualenvs/predeactivate\nvirtualenvwrapper.user_scripts creating /home/eouser/.virtualenvs/postdeactivate\nvirtualenvwrapper.user_scripts creating /home/eouser/.virtualenvs/preactivate\nvirtualenvwrapper.user_scripts creating /home/eouser/.virtualenvs/postactivate\nvirtualenvwrapper.user_scripts creating /home/eouser/.virtualenvs/get_env_details\n</code></pre> <p>Now create your first virtual environment \u2018test\u2019 with the \u2018mkvirtualenv\u2019 command:</p> <pre><code>mkvirtualenv test\n</code></pre> <p>The output should look like this:</p> <pre><code>created virtual environment CPython3.10.12.final.0-64 in 207ms\n creator CPython3Posix(dest=/home/eouser/.virtualenvs/test, clear=False, no_vcs_ignore=False, global=False)\n seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/home/eouser/.local/share/virtualenv)\n added seed packages: pip==23.2.1, setuptools==68.2.0, wheel==0.41.2\n activators BashActivator,CShellActivator,FishActivator,NushellActivator,PowerShellActivator,PythonActivator\nvirtualenvwrapper.user_scripts creating /home/eouser/.virtualenvs/test/bin/predeactivate\nvirtualenvwrapper.user_scripts creating /home/eouser/.virtualenvs/test/bin/postdeactivate\nvirtualenvwrapper.user_scripts creating /home/eouser/.virtualenvs/test/bin/preactivate\nvirtualenvwrapper.user_scripts creating /home/eouser/.virtualenvs/test/bin/postactivate\nvirtualenvwrapper.user_scripts creating /home/eouser/.virtualenvs/test/bin/get_env_details\n</code></pre> <p>Now you should see the name of your environment in parentheses before the username, which means that you\u2019re working in your virtual environment.</p> <pre><code>(test) 
eouser@vm01:~$\n</code></pre> <p>If you would like to exit the current environment, just type the \u2018deactivate\u2019 command:</p> <pre><code>(test) eouser@vm01:~$ deactivate\neouser@vm01:~$\n</code></pre> <p>To start working in a virtual environment, type the \u2018workon\u2019 command:</p> <pre><code>eouser@vm01:~$ workon test\n(test) eouser@vm01:~$\n</code></pre> <p>To remove a virtual environment, invoke \u2018rmvirtualenv\u2019:</p> <pre><code>eouser@vm01:~$ rmvirtualenv test\nRemoving test...\n</code></pre> <p>To list all virtual environments, use \u2018workon\u2019 or \u2018lsvirtualenv\u2019:</p> <pre><code>eouser@vm01:~$ workon\ntest-1\ntest-2\ntest-3\neouser@vm01:~$ lsvirtualenv\ntest-1\n======\n\ntest-2\n======\n\ntest-3\n======\n</code></pre>"},{"location":"cloud/How-to-start-a-VM-from-a-snapshot-on-3Engines-Cloud.html.html","title":"How to start a VM from a snapshot on 3Engines Cloud\ud83d\udd17","text":""},{"location":"cloud/How-to-start-a-VM-from-a-snapshot-on-3Engines-Cloud.html.html#a-volume-snapshot","title":"a) Volume Snapshot\ud83d\udd17","text":"<ol> <li>Choose the desired virtual machine (booted from Volume) and click on the \u201cCreate snapshot\u201d button.</li> </ol> <ol> <li>Name the snapshot. Choose a name that will help you navigate your image and volume snapshot repository. Confirm with the blue button.</li> </ol> <ol> <li>Go to the Volumes tab and press Snapshots.</li> </ol> <ol> <li>Your volume snapshot is stored here. 
To start a virtual machine from this type of snapshot, click the arrow beside \u201cCreate Volume\u201d.</li> </ol> <ol> <li>Choose \u201cLaunch as Instance\u201d.</li> </ol> <ol> <li>Define the instance name and switch to the \u201cSource\u201d tab.</li> </ol> <ol> <li>Set Boot Source to \u201cVolume Snapshot\u201d and assign the previously created snapshot by clicking the arrow.</li> </ol> <ol> <li> <p>The rest of the procedure is the same as in: How to create new Linux VM in OpenStack Dashboard Horizon on 3Engines Cloud.</p> </li> <li> <p>The newly created machine is visible in the Instances list.</p> </li> </ol> <p></p>"},{"location":"cloud/How-to-start-a-VM-from-a-snapshot-on-3Engines-Cloud.html.html#b-image-snapshot","title":"b) Image Snapshot\ud83d\udd17","text":"<p>1. Choose the desired virtual machine (booted from a Glance image) and click on the \u201cCreate snapshot\u201d button.</p> <p></p> <ol> <li>Name the snapshot. Choose a name that will help you navigate your image and volume snapshot repository. Confirm with the blue button.</li> </ol> <p></p> <ol> <li>Go to the Compute tab and press Images.</li> </ol> <p></p> <ol> <li>Scroll down and find your snapshot. Click \u201cLaunch\u201d.</li> </ol> <p></p> <p>Attention</p> <p>An image snapshot is in RAW format and its size is equivalent to that of the image the VM was booted from. In \u201cImages\u201d you may also find symbolic links to volume snapshots (e.g. snapshot-virtual-machine-01 from scenario a)). 
This type of snapshot is in QCOW2 format and its size is shown as 0 bytes.</p> <ol> <li>Name your virtual machine and go to \u201cSource.\u201d Set Boot Source to \u201cInstance snapshot\u201d and choose the previously created snapshot in RAW format.</li> </ol> <p></p> <ol> <li> <p>The rest of the procedure is the same as in: How to create new Linux VM in OpenStack Dashboard Horizon on 3Engines Cloud.</p> </li> <li> <p>The virtual machine has been created.</p> </li> </ol> <p></p>"},{"location":"cloud/How-to-start-a-VM-from-instance-snapshot-using-Horizon-dashboard-on-3Engines-Cloud.html.html","title":"How to start a VM from instance snapshot using Horizon dashboard on 3Engines Cloud\ud83d\udd17","text":"<p>In this article, you will learn how to create a virtual machine from an instance snapshot using the Horizon dashboard.</p>"},{"location":"cloud/How-to-start-a-VM-from-instance-snapshot-using-Horizon-dashboard-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p> <p>No. 2 Ephemeral storage vs. persistent storage</p> <p>Please see the article Ephemeral vs Persistent storage option Create New Volume on 3Engines Cloud to understand the basic difference between ephemeral and persistent types of storage in OpenStack.</p> <p>No. 3 Instance snapshot</p> <p>You need to have an instance snapshot from which you are going to create your virtual machine. 
In this article, we will use an exemplary snapshot named my-instance-snapshot - here is how it can look in the Images -&gt; Images section of the Horizon dashboard:</p> <p></p> <p>The following articles contain information on how to create such a snapshot:</p>"},{"location":"cloud/How-to-transfer-volumes-between-domains-and-projects-using-Horizon-dashboard-on-3Engines-Cloud.html.html","title":"How to transfer volumes between domains and projects using Horizon dashboard on 3Engines Cloud\ud83d\udd17","text":"<p>Volumes in OpenStack can be used to store data. They are visible to virtual machines as drives.</p> <p>Such a volume is usually available to just the project in which it was created. Transferring data stored on it between projects might take a long time, especially if such a volume contains lots of data, like, say, hundreds or thousands of gigabytes (or even more).</p> <p>This article covers changing the assignment of a volume to a project. This allows you to move a volume directly from one project (which we will call the source project) to another (which we will call the destination project) using the Horizon dashboard in a way that does not require you to physically transfer the data.</p> <p>The source project and destination project must both be on the same cloud (for example WAW3-2). They can (but don\u2019t have to) belong to different users from different domains and organizations.</p>"},{"location":"cloud/How-to-transfer-volumes-between-domains-and-projects-using-Horizon-dashboard-on-3Engines-Cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Initializing transfer of volume</li> <li>Accepting transfer of volume</li> <li>Cancelling transfer of volume</li> </ul>"},{"location":"cloud/How-to-transfer-volumes-between-domains-and-projects-using-Horizon-dashboard-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 
1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com</p> <p>No. 2 Volume</p> <p>You need to have a volume which you want to migrate.</p> <p>Such a volume must not be connected to a virtual machine. It must have the following Status: Available.</p> <p>You can check the status of your volume in the Volumes -&gt; Volumes section of the Horizon dashboard. On the screenshot below, that Status is marked with a green rectangle.</p> <p></p> <p>The following article includes information on how to disconnect a volume from a virtual machine: How to move data volume between two VMs using OpenStack Horizon on 3Engines Cloud</p> <p>No. 3 Ability to perform operations on both the source project and the destination project</p> <p>For the transfer to be successful, you need to first initiate it from the source project and then accept it from the destination project.</p> <p>If the source and/or destination project is not managed by you, you might need to get appropriate permission to perform such an operation.</p> <p>To access each of these projects directly (if possible), depending on the circumstances you can either log in to the appropriate account or use the project switcher found at the top of the Horizon dashboard:</p> <p></p> <p>If you don\u2019t have direct access to any of these projects, you can ask their members to execute the commands mentioned in this article.</p>"},{"location":"cloud/How-to-transfer-volumes-between-domains-and-projects-using-Horizon-dashboard-on-3Engines-Cloud.html.html#step-1-initializing-transfer-of-volume","title":"Step 1: Initializing transfer of volume\ud83d\udd17","text":"<p>Perform this step in the source project.</p> <p>Navigate to the section Volumes -&gt; Volumes of the Horizon dashboard. Confirm that the volume which you want to migrate has the following Status: Available. 
In the example below, this requirement is met - see the value marked with a blue rectangle.</p> <p></p> <p>If your volume has a different status, do not continue this workflow and check Prerequisite No. 2.</p> <p>In the row which represents the volume you want to migrate, from the drop-down menu found in the Actions column choose Create Transfer:</p> <p></p> <p>You should see the following window:</p> <p></p> <p>Enter a descriptive name in the Transfer Name text field and click Create Volume Transfer.</p> <p>You should now see the following window:</p> <p></p> <p>Write down the Transfer ID and Authorization Key. You can also use the Download transfer credentials button to get these credentials as a plain text file.</p> <p>Warning</p> <p>Since these credentials allow somebody to capture the volume while the transfer is active, protect them and share them only with individuals for whom they are meant!</p> <p>Once you have done this, you can click Close to close the window.</p> <p>Your volume should now have the following Status: Awaiting Transfer.</p> <p></p> <p>Note that after initializing the transfer, the volume cannot be connected to any virtual machine until the transfer is accepted or cancelled. To learn how to cancel the transfer (if you, say, accidentally chose the wrong volume), see section Cancelling transfer of volume near the end of the article.</p>"},{"location":"cloud/How-to-transfer-volumes-between-domains-and-projects-using-Horizon-dashboard-on-3Engines-Cloud.html.html#step-2-accepting-transfer-of-volume","title":"Step 2: Accepting transfer of volume\ud83d\udd17","text":"<p>Perform this step in the destination project.</p> <p>Navigate to the section Volumes -&gt; Volumes of the Horizon dashboard. 
Click Accept Transfer:</p> <p></p> <p>You should see the following window:</p> <p></p> <p>Enter the Transfer ID and the Authorization Key you obtained in Step 1 above into the appropriate text fields.</p> <p>Click Accept Volume Transfer.</p> <p>The volume should now be visible on the list:</p> <p></p>"},{"location":"cloud/How-to-transfer-volumes-between-domains-and-projects-using-Horizon-dashboard-on-3Engines-Cloud.html.html#cancelling-transfer-of-volume","title":"Cancelling transfer of volume\ud83d\udd17","text":"<p>If you, say, accidentally initiated a transfer for the wrong volume and nobody has accepted it, the transfer can be cancelled.</p> <p>To do that, navigate to the section Volumes -&gt; Volumes of the Horizon dashboard:</p> <p></p> <p>In this example, let\u2019s assume that we mistakenly created a transfer for volume my-volume. Because of that, it has the following status: Awaiting Transfer. Such a volume cannot be connected to an instance.</p> <p>To cancel the transfer, simply click Cancel Transfer in the Actions column of the row representing your volume:</p> <p></p> <p>You will be asked for confirmation:</p> <p></p> <p>Click Cancel Transfer.</p> <p>If the operation was successful, you should get the following message in the top right corner of the Horizon dashboard:</p> <p></p> <p>Note that this message might be confusing if you read only its first line. It does not refer to the removal of the volume, but to the cancellation of the volume transfer.</p> <p>After cancelling, your volume should once again have the status Available:</p> <p></p>"},{"location":"cloud/How-to-transfer-volumes-between-domains-and-projects-using-Horizon-dashboard-on-3Engines-Cloud.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>Now that the volume has been transferred, you might want to connect it to a virtual machine. 
This article includes information on how to do that: How to move data volume between two VMs using OpenStack Horizon on 3Engines Cloud</p> <p>The workflow described in this article can also be done using the OpenStack CLI. Learn more here: How to transfer volumes between domains and projects using OpenStack CLI client on 3Engines Cloud</p>"},{"location":"cloud/How-to-upload-custom-image-to-3Engines-Cloud-cloud-using-OpenStack-Horizon-dashboard.html.html","title":"How to upload custom image to 3Engines Cloud cloud using OpenStack Horizon dashboard\ud83d\udd17","text":"<p>In this tutorial, you will upload a custom image stored on your local computer to 3Engines Cloud, using the Horizon Dashboard. The uploaded image will be available within your project alongside the default images from 3Engines Cloud, and you will be able to create virtual machines using it.</p>"},{"location":"cloud/How-to-upload-custom-image-to-3Engines-Cloud-cloud-using-OpenStack-Horizon-dashboard.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>How to check for the presence of the image in 3Engines Cloud</li> <li>How different images might behave</li> <li>How to upload an image using the Horizon dashboard</li> <li>Example: how to upload an image for Debian 11</li> <li>What happens if you lose your Internet connection during upload</li> </ul>"},{"location":"cloud/How-to-upload-custom-image-to-3Engines-Cloud-cloud-using-OpenStack-Horizon-dashboard.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p> <p>No. 2 Custom image you wish to upload</p> <p>You need to have the image you wish to upload. 
It can be one of the following image formats:</p> aki ami ari iso qcow2 raw vdi vhd vhdx vmdk <p>The following container formats are supported:</p> aki ami ari bare docker ova ovf <p>For the explanation of these formats, see the article What Image Formats are Available in OpenStack 3Engines Cloud cloud.</p> <p>No. 3 Uploaded public SSH key</p> <p>If the image you wish to upload requires you to attach an SSH public key while creating the virtual machine, the key will need to be uploaded to 3Engines Cloud cloud. One of these articles should help:</p>"},{"location":"cloud/How-to-upload-your-custom-image-using-OpenStack-CLI-on-3Engines-Cloud.html.html","title":"How to upload your custom image using OpenStack CLI on 3Engines Cloud\ud83d\udd17","text":"<p>In this tutorial, you will upload a custom image stored on your local computer to 3Engines Cloud cloud, using the OpenStack CLI client. The uploaded image will be available within your project alongside default images from 3Engines Cloud cloud and you will be able to create virtual machines using it.</p>"},{"location":"cloud/How-to-upload-your-custom-image-using-OpenStack-CLI-on-3Engines-Cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>How to check for the presence of the image in your OpenStack cloud</li> <li>How different images might behave</li> <li>How to upload the image using only CLI commands</li> <li>Example: how to upload an image for Debian 11</li> <li>What happens if you lose Internet connection during upload</li> </ul>"},{"location":"cloud/How-to-upload-your-custom-image-using-OpenStack-CLI-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p> <p>No. 2 OpenStack CLI configured</p> <p>You need to have the OpenStack CLI client configured and operational. 
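As a preview of the kind of command this tutorial builds up to, an upload with the CLI might look roughly like this; the file name and image name below are examples only:

```shell
# Upload a local qcow2 file as a private Glance image (names are examples)
openstack image create \
  --disk-format qcow2 \
  --container-format bare \
  --file debian-11-genericcloud-amd64.qcow2 \
  --private \
  my-debian-11
```
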
See How to install OpenStackClient for Linux on 3Engines Cloud. You can test whether your OpenStack CLI is properly activated by executing the openstack server list command mentioned at the end of that article - it should return the list of your virtual machines.</p> <p>No. 3 Custom image you wish to upload</p> <p>You need to have the image you wish to upload. It can be one of the following image formats:</p> aki ami ari iso qcow2 raw vdi vhd vhdx vmdk <p>The following container formats are supported:</p> aki ami ari bare docker ova ovf <p>For the explanation of these formats, see the article What Image Formats are Available in OpenStack 3Engines Cloud cloud.</p> <p>No. 4 Uploaded public SSH key</p> <p>If the image you wish to upload requires you to attach an SSH public key while creating the virtual machine, the key will need to be uploaded to 3Engines Cloud cloud. One of these articles should help:</p>"},{"location":"cloud/How-to-use-Docker-on-3Engines-Cloud.html.html","title":"How to install and use Docker on Ubuntu 24.04\ud83d\udd17","text":"<p>This guide will walk you through</p> <ul> <li>the installation of Docker on Ubuntu 24.04, and</li> <li>basic commands to get you started with Docker containers.</li> </ul>"},{"location":"cloud/How-to-use-Docker-on-3Engines-Cloud.html.html#what-we-are-going-to-cover","title":"What we are going to cover","text":""},{"location":"cloud/How-to-use-GUI-in-Linux-VM-on-3Engines-Cloud-and-access-it-from-local-Linux-computer.html.html","title":"How to Use GUI in Linux VM on 3Engines Cloud and access it From Local Linux Computer\ud83d\udd17","text":"<p>In this article you will learn how to use a GUI (graphical user interface) on a Linux virtual machine running on 3Engines Cloud cloud.</p> <p>For this purpose, you will install and use X2Go on your local Linux computer.</p> <p>This article covers the installation of two desktop environments: MATE and XFCE. 
Choose the one that suits you best.</p>"},{"location":"cloud/How-to-use-GUI-in-Linux-VM-on-3Engines-Cloud-and-access-it-from-local-Linux-computer.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Installing X2Go client</li> <li>Installing X2Go server and desktop environment (MATE or XFCE)</li> <li>Connecting to your virtual machine using X2Go client</li> <li>Basic troubleshooting</li> </ul>"},{"location":"cloud/How-to-use-GUI-in-Linux-VM-on-3Engines-Cloud-and-access-it-from-local-Linux-computer.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p> <p>No. 2 Linux installed on your local computer</p> <p>You need to have a local computer with Linux installed. This article was written for such computers running Ubuntu Desktop 22.04. If you are running a different Linux distribution, adjust the instructions from this article accordingly.</p> <p>No. 3 Linux virtual machine</p> <p>You need a Linux virtual machine running on 3Engines Cloud cloud. You need to be able to access it via SSH. The following article explains how to create one such virtual machine:</p> <p>How to create a Linux VM and access it from Linux command line on 3Engines Cloud</p> <p>This article was written for virtual machines using a default Ubuntu 20.04 image on cloud. 
Adjust the instructions from this article accordingly if your virtual machine has a different Linux distribution.</p>"},{"location":"cloud/How-to-use-GUI-in-Linux-VM-on-3Engines-Cloud-and-access-it-from-local-Linux-computer.html.html#step-1-install-x2go-client","title":"Step 1: Install X2Go client\ud83d\udd17","text":"<p>Open the terminal on your local Linux computer and update your packages by executing the following command:</p> <pre><code>sudo apt update &amp;&amp; sudo apt upgrade\n</code></pre> <p>Now, install the x2goclient package:</p> <pre><code>sudo apt install x2goclient\n</code></pre>"},{"location":"cloud/How-to-use-GUI-in-Linux-VM-on-3Engines-Cloud-and-access-it-from-local-Linux-computer.html.html#step-2-install-the-desktop-environment-on-your-vm","title":"Step 2: Install the desktop environment on your VM\ud83d\udd17","text":""},{"location":"cloud/How-to-use-GUI-in-Linux-VM-on-3Engines-Cloud-and-access-it-from-local-Linux-computer.html.html#method-1-installing-mate","title":"Method 1: Installing MATE\ud83d\udd17","text":"<p>Connect to your VM using SSH. Update your packages there:</p> <pre><code>sudo apt update &amp;&amp; sudo apt upgrade\n</code></pre> <p>Now, install the MATE desktop environment and X2Go server:</p> <pre><code>sudo apt install x2goserver ubuntu-mate-desktop mate-applet-brisk-menu\n</code></pre> <p>You can add other packages to that command as needed.</p> <p>During the installation you will be asked to choose the keyboard layout. Choose the one that suits you best using the arrow keys and Enter.</p> <p>Once the installation is completed, reboot your VM by executing the following command:</p> <pre><code>sudo reboot\n</code></pre>"},{"location":"cloud/How-to-use-GUI-in-Linux-VM-on-3Engines-Cloud-and-access-it-from-local-Linux-computer.html.html#method-2-installing-xfce","title":"Method 2: Installing XFCE\ud83d\udd17","text":"<p>Connect to your VM using SSH. 
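For example, a connection might look like this; the key path and floating IP below are placeholders, and eouser is the default user assumed for Ubuntu images in this article:

```shell
# Placeholder key path and floating IP - substitute your own values
ssh -i ~/.ssh/my-key eouser@64.225.135.119
```
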
Update your packages there:</p> <pre><code>sudo apt update &amp;&amp; sudo apt upgrade\n</code></pre> <p>Now, install the XFCE desktop environment, the terminal emulator and X2Go server:</p> <pre><code>sudo apt install x2goserver xfce4 xfce4-terminal\n</code></pre> <p>You can add other packages to that command as needed.</p> <p>During the installation you will be asked to choose the keyboard layout. Choose the one that suits you best using the arrow keys and Enter.</p> <p>Once the installation is completed, reboot your VM by executing the following command:</p> <pre><code>sudo reboot\n</code></pre>"},{"location":"cloud/How-to-use-GUI-in-Linux-VM-on-3Engines-Cloud-and-access-it-from-local-Linux-computer.html.html#step-3-connect-to-your-vm-using-x2go","title":"Step 3: Connect to your VM using X2Go\ud83d\udd17","text":"<p>Open X2Go on your local Linux computer. If you haven\u2019t configured any session yet, you should get the window used for creating one:</p> <p></p> <p>If you didn\u2019t get such a window, click the New session button on the X2Go toolbar.</p> <p>Enter the name of your choice for your session in the Session name: text box. In this example, the name cloud-session will be used.</p> <p>In the text box Host: enter the floating IP of your VM.</p> <p>In the Login: text box, enter eouser.</p> <p>Click the folder icon next to the Use RSA/DSA key for ssh connection: text field. A file selector should appear. Choose the SSH private key file you use for connecting to your VM via SSH.</p> <p>From the drop-down menu in the Session type section choose the desktop environment you installed, for example MATE or XFCE.</p> <p>In the Input/Output tab choose the screen resolution that suits you best in the Display section. Here you can also choose whether you wish to share the clipboard with the remote VM in the Clipboard mode section.</p> <p>Click OK. 
The window should close.</p> <p>The session you created should now be visible in the X2Go client:</p> <p></p> <p>Click the name of your session to connect to your VM. Wait up to a minute until your connection is established. You should now see your desktop environment.</p> <p>If you chose MATE, it should look like this:</p> <p></p> <p>If you, however, chose XFCE, it should look like this:</p> <p></p>"},{"location":"cloud/How-to-use-GUI-in-Linux-VM-on-3Engines-Cloud-and-access-it-from-local-Linux-computer.html.html#troubleshooting-using-the-terminal-emulator-on-xfce","title":"Troubleshooting - Using the terminal emulator on XFCE\ud83d\udd17","text":"<p>If the button Terminal Emulator on your taskbar does not launch your terminal, click the Applications menu in the upper left corner of the screen:</p> <p></p> <p>Choose Settings -&gt; Preferred Applications:</p> <p></p> <p>You should get the following window:</p> <p></p> <p>Open the tab Utilities. The window should now look like this:</p> <p></p> <p>From the drop-down menu in the Terminal Emulator section choose Xfce Terminal.</p> <p>Click Close.</p> <p>The button should now launch the terminal emulator correctly.</p>"},{"location":"cloud/How-to-use-GUI-in-Linux-VM-on-3Engines-Cloud-and-access-it-from-local-Linux-computer.html.html#troubleshooting-keyboard-layout","title":"Troubleshooting - Keyboard layout\ud83d\udd17","text":"<p>If you discover that the system does not use the keyboard layout you chose during the installation of the desktop environment, you will need to set it manually. 
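As a quick stopgap, you can also switch the layout from a terminal inside the session with setxkbmap; the layout code de below is just an example:

```shell
# Temporarily switch the current X session's keyboard layout (example: German)
setxkbmap de
```

This only lasts for the current session; the per-environment settings below make the change permanent.
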
The process differs depending on the desktop environment you chose.</p>"},{"location":"cloud/How-to-use-GUI-in-Linux-VM-on-3Engines-Cloud-and-access-it-from-local-Linux-computer.html.html#mate","title":"MATE\ud83d\udd17","text":"<p>Click the Menu in the upper left corner of the screen:</p> <p></p> <p>From the Preferences section choose Keyboard:</p> <p></p> <p>You should get the following window:</p> <p></p> <p>Navigate to the Layouts tab:</p> <p></p> <p>Here, you can add or remove keyboard layouts depending on your needs.</p>"},{"location":"cloud/How-to-use-GUI-in-Linux-VM-on-3Engines-Cloud-and-access-it-from-local-Linux-computer.html.html#xfce","title":"XFCE\ud83d\udd17","text":"<p>From the Applications menu in the upper left corner of the screen choose Settings -&gt; Keyboard. You should get the following window:</p> <p></p> <p>Go to the Layout tab:</p> <p></p> <p>Unselect the Use system defaults check box. You can now add or remove the keyboard layouts depending on your needs.</p>"},{"location":"cloud/How-to-use-Security-Groups-in-Horizon-on-3Engines-Cloud.html.html","title":"How to use Security Groups in Horizon on 3Engines Cloud\ud83d\udd17","text":"<p>Security groups in OpenStack are used to filter the Internet traffic coming to and from your virtual machines. They consist of security rules and can be attached to your virtual machines during and after the creation of the machines.</p> <p>By default, each instance has a rule which blocks all incoming Internet traffic and allows all outgoing traffic. 
To modify those settings, you can apply other security groups to it.</p>"},{"location":"cloud/How-to-use-Security-Groups-in-Horizon-on-3Engines-Cloud.html.html#viewing-the-security-groups","title":"Viewing the security groups\ud83d\udd17","text":"<p>To check your current security groups, please follow these steps:</p> <p>Log in to your 3Engines Cloud account: https://horizon.3Engines.com.</p> <p>In the panel on the left choose Network and then Security Groups.</p> <p>You will see the list of your security groups there. The following groups should always be present:</p> <ul> <li>default which blocks all incoming traffic and allows all outgoing traffic.</li> <li>allow_ping_ssh_rdp which allows incoming ping, SSH (port 22) and RDP (port 3389) connections. This group is not attached to your VMs by default.</li> </ul> <p></p>"},{"location":"cloud/How-to-use-Security-Groups-in-Horizon-on-3Engines-Cloud.html.html#creating-a-new-security-group","title":"Creating a new security group\ud83d\udd17","text":"<p>In order to create a new security group, please follow these steps:</p> <p>Click the Create Security Group button.</p> <p>The following window should appear:</p> <p></p> <p>Give your security group a recognizable name in the Name text field. 
Optionally, you can also provide a description of it in the Description text field.</p> <p>Confirm your choices by clicking the Create Security Group button.</p> <p>You should now be taken to the screen which allows you to modify the security rules of that security group - in our case the group is called my-group:</p> <p></p> <p>Note</p> <p>If you want to access that screen later, you can click the Manage Rules button next to your security group in the Security Groups screen.</p> <p>By default, your new security group should contain two rules seen on the screenshot above - the first one allows all outgoing traffic on IPv4 and the second one allows all outgoing traffic on IPv6.</p>"},{"location":"cloud/How-to-use-Security-Groups-in-Horizon-on-3Engines-Cloud.html.html#adding-security-rules-to-a-security-group","title":"Adding security rules to a security group\ud83d\udd17","text":"<p>In the Manage Security Rules screen that you entered in the previous step, click the Add Rule button.</p> <p>The following form will appear. In it you can define the security rule:</p> <p></p> <p>The drop-down list Rule allows you to choose the type of rule. These types, along with the available options for them, are explained below. Once you have finished, click Add to finish creating your rule.</p> <p>Custom TCP Rule</p> <p>This type of rule allows you to create a custom rule for the TCP protocol. 
This protocol is commonly used, amongst other things, for interacting with websites.</p> <p>You can optionally provide the description of that rule in the Description text field.</p> <p>The drop-down list Direction allows you to choose whether this rule should apply to incoming (Ingress) or outgoing (Egress) traffic.</p> <p>The drop-down list Port has the following options:</p> <ul> <li>If you choose Port, you will get the text field Port in which you can input one port for which this rule will apply.</li> <li>If you choose Port Range, you will be able to enter the first port in the range in the text field From Port and the last port in the text field To Port.</li> <li>If you choose All ports, this rule will apply to all ports.</li> </ul> <p>The drop-down list Remote has the following options:</p> <ul> <li>If you choose CIDR, you will get the text field CIDR which allows you to input the IP address block for which this rule will apply using the CIDR notation, for example: 64.225.135.119/32. This example means that only the 64.225.135.119 IP address is included in this rule. If this notation were 64.225.135.119/8, the rule would apply to all IP addresses whose first octet is 64.</li> <li>If you choose Security Group, you will get the drop-down list Security Group - the machines which are in that security group will be able to access your virtual machine. You will also get the drop-down list Ether Type from which you can choose IPv4 or IPv6. You should almost always use IPv4 for your network operations (apart from a few rare instances in which you know that IPv6 is needed).</li> </ul> <p>Custom UDP Rule</p> <p>This type of rule has the same options as Custom TCP Rule, but involves the UDP protocol. It is a protocol similar to TCP, but the main difference is that it does not provide session control.</p> <p>Custom ICMP rule</p> <p>This type of rule is used for ICMP. This protocol is used, among others, for traceroute and ping. 
It has the same options as the Custom TCP Rule, but instead of ports, it uses the ICMP types (which you should put in the Type text field) and ICMP codes (which should be put in the Code text field).</p> <p>Other Protocol</p> <p>This option is for other protocols, such as SIP (a protocol used for Internet telephony).</p> <p>All ICMP, All TCP, All UDP</p> <p>These options apply to all ICMP, TCP and UDP traffic, respectively.</p> <p>Other options</p> <p>The drop-down list Rule also contains templates for commonly used services like DNS (Domain Name System), HTTP (Hypertext Transfer Protocol) or SMTP (Simple Mail Transfer Protocol). If you choose one of them, you only have to provide the information about the Remote - CIDR or Security Group. The explanation for those options is in the Custom TCP Rule section.</p>"},{"location":"cloud/How-to-use-Security-Groups-in-Horizon-on-3Engines-Cloud.html.html#adding-a-security-group-to-your-vm","title":"Adding a Security Group to your VM\ud83d\udd17","text":"<p>You can apply your security group to your VM either during or after creating it.</p>"},{"location":"cloud/How-to-use-Security-Groups-in-Horizon-on-3Engines-Cloud.html.html#during-its-creation","title":"During its creation\ud83d\udd17","text":"<p>During the process of creating your virtual machine you can add security groups to it. This happens during the Security Groups step:</p> <p></p> <p>You can add security groups to your VM by using the \u2191 button and remove them using the \u2193 button - the same as in the Source or Network steps. In this case, we have added the my-group group to the VM:</p> <p></p>"},{"location":"cloud/How-to-use-Security-Groups-in-Horizon-on-3Engines-Cloud.html.html#after-its-creation","title":"After its creation\ud83d\udd17","text":"<p>Go to Compute &gt; Instances. Click the drop-down menu in the row containing information about the instance to which you wish to apply your rule (column Actions). Select Edit Security Groups. 
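For reference, the Horizon workflow in this article can also be sketched with the OpenStack CLI; my-group and my-vm below are example names:

```shell
# Create a security group and allow incoming SSH from anywhere
openstack security group create my-group --description "Allow SSH"
openstack security group rule create --ingress --protocol tcp \
  --dst-port 22 --remote-ip 0.0.0.0/0 my-group

# Attach the group to an existing instance
openstack server add security group my-vm my-group
```
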
You should see a window similar to this:</p> <p></p> <p>In the left section you can see available security groups and in the right section you can see security groups already attached to your VM. To apply a security group to your VM, click the + button next to that group and to remove it, click the - button next to it.</p>"},{"location":"cloud/OpenStack-user-roles-on-3Engines-Cloud.html.html","title":"OpenStack User Roles on 3Engines Cloud\ud83d\udd17","text":"<p>A user role in OpenStack cloud is a set of permissions that govern how members of specific groups interact with system resources, their access scope, and capabilities.</p> <p>This guide simplifies OpenStack roles for casual users of 3Engines Cloud VMs. It focuses on practical use cases and commonly required roles.</p>"},{"location":"cloud/OpenStack-user-roles-on-3Engines-Cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Frequently used user roles</li> </ul> <ul> <li>Common user roles</li> <li>Roles for Kubernetes users</li> <li>Roles for Load Balancer users</li> </ul> <ul> <li>Examples of using user roles</li> </ul> <ul> <li>Using user roles while creating application credential in Horizon</li> <li>Using user roles while creating application credential via the CLI</li> <li>Using user roles while creating a new project</li> <li>Using member role only while creating a new user</li> </ul> <ul> <li>Dictionary of other roles</li> </ul>"},{"location":"cloud/OpenStack-user-roles-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>1. Account</p> <p>You need a 3Engines Cloud hosting account with Horizon access: https://horizon.3Engines.com.</p> <p>Also see:</p> <p>What is an OpenStack project on 3Engines Cloud</p> <p>What is an OpenStack domain on 3Engines Cloud</p> <p>How to generate or use Application Credentials via CLI on 3Engines Cloud</p> <p>2. 
Familiarity with OpenStack Commands</p> <p>Ensure you know the following OpenStack commands:</p> openstack The primary CLI for interacting with OpenStack services. How to install OpenStackClient for Linux on 3Engines Cloud kubectl <p>CLI for Kubernetes clusters. Example article:</p> <p>How To Access Kubernetes Cluster Post Deployment Using Kubectl On 3Engines Cloud OpenStack Magnum</p>"},{"location":"cloud/OpenStack-user-roles-on-3Engines-Cloud.html.html#frequently-used-user-roles","title":"Frequently used user roles\ud83d\udd17","text":""},{"location":"cloud/OpenStack-user-roles-on-3Engines-Cloud.html.html#common-user-roles","title":"Common user roles\ud83d\udd17","text":"member <p>Grants standard access to project resources.</p> <p>Note</p> <p>Older OpenStack versions may use _member_. If both member and _member_ exist, choose member.</p> <ul> <li>Horizon: Project -&gt; Overview</li> <li>CLI: openstack server list, openstack project list</li> </ul> observer <p>Read-only access for monitoring and auditing resources. Suitable for third-party tools like Prometheus or Grafana.</p> <ul> <li>Horizon: Project -&gt; Overview</li> <li>CLI: openstack server show, openstack project show</li> </ul> reader <p>Read-only access with slightly broader permissions than observer. 
Ideal for monitoring and analytics tools requiring detailed resource data.</p> <ul> <li>Horizon: Project -&gt; Overview</li> <li>CLI: openstack server list, openstack project list</li> </ul>"},{"location":"cloud/OpenStack-user-roles-on-3Engines-Cloud.html.html#roles-for-kubernetes-users","title":"Roles for Kubernetes users\ud83d\udd17","text":"k8s_admin <p>Administrative access to manage Kubernetes clusters and resources.</p> <ul> <li>Horizon: Kubernetes -&gt; Clusters</li> <li>CLI: kubectl create deployment, kubectl get pods</li> </ul> k8s_developer <p>For developers deploying applications within Kubernetes.</p> <ul> <li>Horizon: Kubernetes -&gt; Workloads</li> <li>CLI: kubectl create, kubectl apply</li> </ul> k8s_viewer <p>Read-only access to monitor Kubernetes resources.</p> <ul> <li>Horizon: Kubernetes -&gt; Overview</li> <li>CLI: kubectl get pods, kubectl describe pod</li> </ul>"},{"location":"cloud/OpenStack-user-roles-on-3Engines-Cloud.html.html#roles-for-load-balancer-users","title":"Roles for Load Balancer users\ud83d\udd17","text":"load-balancer_member <p>Grants access to deploy applications behind load balancers.</p> <ul> <li>Horizon: Network -&gt; Load Balancers</li> <li>CLI: openstack loadbalancer member create, openstack loadbalancer member list</li> </ul> load-balancer_observer <p>Read-only access to monitor load balancer configurations.</p> <ul> <li>Horizon: Network -&gt; Load Balancers</li> <li>CLI: openstack loadbalancer show, openstack loadbalancer stats show</li> </ul>"},{"location":"cloud/OpenStack-user-roles-on-3Engines-Cloud.html.html#how-to-view-roles-in-horizon","title":"How to View Roles in Horizon\ud83d\udd17","text":"<p>You can view roles in Horizon by navigating to Identity -&gt; Roles.</p> ../_images/user-roles-list-2.png ../_images/user-roles-list-1.png <p>Assigning multiple roles is best done during project creation rather than user creation.</p> 
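From the CLI, the roles above show up in commands such as the following; myproject, myuser and my-app-cred are example names:

```shell
# List the roles visible to you
openstack role list

# Grant the member role to a user within a project
openstack role add --project myproject --user myuser member

# Create an application credential limited to selected roles
openstack application credential create \
  --role member --role load-balancer_member my-app-cred
```
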
<p></p>"},{"location":"cloud/OpenStack-user-roles-on-3Engines-Cloud.html.html#examples-of-using-user-roles","title":"Examples of using user roles\ud83d\udd17","text":"<p>The following articles, as one of many steps, describe how to assign a role to a new project, credential, user or group.</p>"},{"location":"cloud/OpenStack-user-roles-on-3Engines-Cloud.html.html#using-user-roles-while-creating-application-credential-in-horizon","title":"Using user roles while creating application credential in Horizon\ud83d\udd17","text":"<p>Normally, you access the cloud via user credentials, which may be one- or two-factor credentials. OpenStack provides a more direct way of gaining access to the cloud with an application credential, and you can create a credential with several user roles.</p> <p>The following S3 article shows how to select user roles when creating an application credential through Horizon:</p> <p>/s3/Create-S3-bucket-and-use-it-in-Sentinel-Hub-requests</p> <p></p>"},{"location":"cloud/OpenStack-user-roles-on-3Engines-Cloud.html.html#using-user-roles-while-creating-application-credential-via-the-cli","title":"Using user roles while creating application credential via the CLI\ud83d\udd17","text":"<p>This is the main article about application credentials; it mostly uses the CLI:</p> <p>How to generate or use Application Credentials via CLI on 3Engines Cloud</p> <p>Here is how to specify user roles through CLI parameters:</p> <p></p>"},{"location":"cloud/OpenStack-user-roles-on-3Engines-Cloud.html.html#using-user-roles-while-creating-a-new-project","title":"Using user roles while creating a new project\ud83d\udd17","text":"<p>In the article How to Create and Configure New Openstack Project Through Horizon on 3Engines Cloud Cloud we use the Project Members command to define which users to include in the project:</p> <p></p> <p>You would then continue by defining the roles for each user in the project:</p> <p></p> <p>See this Rancher article, How to install Rancher RKE2 Kubernetes on 3Engines 
Cloud. Then, in Preparation step 1, a new project is created, with the following user roles:</p> <ul> <li>load-balancer_member,</li> <li>member and</li> <li>creator.</li> </ul> <p></p>"},{"location":"cloud/OpenStack-user-roles-on-3Engines-Cloud.html.html#using-member-role-only-while-creating-a-new-user","title":"Using member role only while creating a new user\ud83d\udd17","text":"<p>In the SLURM article, we first create a new OpenStack Keystone user, with the role of member.</p> <p>/cuttingedge/Sample-SLURM-Cluster-on-3Engines-Cloud-Cloud-with-ElastiCluster</p> <p></p> <p>That user can log in to Horizon and use project resources together with other users who are defined in a similar way.</p>"},{"location":"cloud/OpenStack-user-roles-on-3Engines-Cloud.html.html#dictionary-of-other-roles","title":"Dictionary of other roles\ud83d\udd17","text":"admin Grants unrestricted access to all resources and configurations in the system. Typically reserved for superusers or administrators. project_admin Provides administrative privileges within a specific project, allowing users to manage resources, members, and settings at the project level. network_admin Focused on managing networking resources, including creating networks, subnets, and routers, as well as assigning IPs. storage_admin Offers full control over storage resources, such as creating, modifying, and deleting volumes and snapshots. database_admin Designed for managing database resources, including provisioning, scaling, and backup configurations. audit_viewer A read-only role dedicated to viewing logs, system events, and audit trails for compliance and monitoring purposes. compute_operator Allows management of compute resources, such as starting, stopping, and resizing virtual machines, but without administrative privileges. volume_user Enables users to attach and detach volumes to/from instances and perform basic volume management tasks. 
image_creator Provides permissions to upload, manage, and delete virtual machine images in the image repository. security_group_manager Focused on managing security groups and rules, including creating and updating firewall configurations. dns_admin Grants administrative privileges over DNS zones, records, and configurations. keypair_user A role for managing SSH key pairs used for authenticating access to virtual machines. heat_stack_owner Enables users to create and manage orchestration stacks using Heat templates, including scaling and updating stacks. backup_admin Offers full control over backup operations, such as scheduling backups, restoring data, and managing backup repositories. report_viewer A read-only role that provides access to reports and analytics dashboards without the ability to modify data. api_user Designed for programmatic access to the system via APIs, allowing automation and integration tasks. support_role A limited-access role for customer support agents, enabling them to troubleshoot issues without full system access. custom_role (generic) Represents a user-defined role tailored for specific permissions or organizational policies. Refer to system administrators for details on its scope."},{"location":"cloud/Resizing-a-virtual-machine-using-OpenStack-Horizon-on-3Engines-Cloud.html.html","title":"Resizing a virtual machine using OpenStack Horizon on 3Engines Cloud\ud83d\udd17","text":""},{"location":"cloud/Resizing-a-virtual-machine-using-OpenStack-Horizon-on-3Engines-Cloud.html.html#introduction","title":"Introduction\ud83d\udd17","text":"<p>When creating a new virtual machine under OpenStack, one of the options you choose is the flavor. A flavor is a predefined combination of CPU, memory and disk size and there usually is a number of such flavors for you to choose from.</p> <p>After the instance is spawned, it is possible to change one flavor for another, and that process is called resizing. 
You might want to resize an already existing VM in order to:</p> <ul> <li>increase (or decrease) the number of CPUs used,</li> <li>use more RAM to prevent crashes or enable swapping,</li> <li>add larger storage to avoid running out of disk space,</li> <li>seamlessly transition from a testing to a production environment,</li> <li>change application workload by scaling the VM up or down.</li> </ul> <p>In this article, we are going to resize VMs using commands in OpenStack Horizon.</p>"},{"location":"cloud/Resizing-a-virtual-machine-using-OpenStack-Horizon-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://portal.3Engines.com/.</p> <p>No. 2 How to create a new VM</p> <p>If you are a normal user of 3Engines Cloud hosting, you will have all the privileges needed to resize the VM. Make sure that the VM you are about to resize belongs to a project you have access to. Here are the basics of creating a Linux VM in Horizon:</p> <p>How to create a Linux VM and access it from Linux command line on 3Engines Cloud</p> <p>How to create a Linux VM and access it from Windows desktop on 3Engines Cloud</p> <p>No. 3 Awareness of existing quotas and flavors limits</p> <p>For a general introduction to quotas and flavors, see Dashboard Overview \u2013 Project Quotas And Flavors Limits on 3Engines Cloud.</p> <p>Also:</p> <ul> <li>The VM you want to resize is in an active or shut down state.</li> <li>A flavor with the desired resource configuration exists.</li> <li>Adequate resources are available in your OpenStack environment to accommodate the resize.</li> </ul>"},{"location":"cloud/Resizing-a-virtual-machine-using-OpenStack-Horizon-on-3Engines-Cloud.html.html#creating-a-new-vm","title":"Creating a new VM\ud83d\udd17","text":"<p>To illustrate the commands in this article, let us create a new VM in order to start with a clean slate. 
(It goes without saying that you can practice with any of the already existing VMs in your account.)</p> <p>Use Prerequisite No. 2 to create a new VM and let it be called Resizing. Here is a typical list of flavors you might see:</p> <p></p> <p>For the sake of this article, let us choose a \u201cmiddle\u201d flavor \u2013 not too large and not too small to start with. Let it be eo2a.large.</p> <p></p> <p>Finish the process of creating a new VM and let it spawn:</p> <p></p> <p>Let us now resize the VM called Resizing.</p>"},{"location":"cloud/Resizing-a-virtual-machine-using-OpenStack-Horizon-on-3Engines-Cloud.html.html#steps-to-resize-the-vm","title":"Steps to Resize the VM\ud83d\udd17","text":"<p>Locate the VM by using Horizon commands Compute -&gt; Instances.</p> <p>Click the dropdown arrow next to the VM and select Resize Instance.</p> <p></p> <p>You will see the Resize Instance form on screen:</p> <p></p> <p>Assuming you wanted to scale up the VM, you could decide upon eo2.xlarge. Let us compare the two flavors:</p> Flavor VCPUs RAM Total Disk Root Disk eo2a.large 2 7.45 GB 32 GB 32 GB eo2.xlarge 4 16 GB 64 GB 64 GB <p>So, select eo2.xlarge as the new flavor. This screen shows its parameters:</p> <p></p>"},{"location":"cloud/Resizing-a-virtual-machine-using-OpenStack-Horizon-on-3Engines-Cloud.html.html#advanced-options","title":"Advanced Options\ud83d\udd17","text":"<p>The Advanced Options tab contains two further options for resizing the instance.</p> <p></p> Disk Partition Whether the entire disk is a single partition that automatically resizes. Options are Automatic and Manual Server Group <p>Here you select the server group to which the instance can belong after resizing. 
Even if you never manually created a server group, they may be present as a consequence of creating Kubernetes clusters, or using parameters for group affinity.</p> <p>The list can be quite long:</p> <p></p>"},{"location":"cloud/Resizing-a-virtual-machine-using-OpenStack-Horizon-on-3Engines-Cloud.html.html#resize-the-vm","title":"Resize the VM\ud83d\udd17","text":"<p>Now, click Resize to proceed with the resizing of the VM.</p> <p></p> <p>In the Status column, there will be the message Confirm or Revert Resize/Migrate. It means the system is waiting for you to decide what to do next. To confirm the resizing/migrating process, click the Confirm Resize/Migrate button in the Actions column.</p> <p>The resizing process will finish within a couple of seconds and the VM will be in Status Active.</p> <p>If you encounter issues, you can choose Revert Resize to return the VM to its previous state. This option is, however, only available before Resize/Migration Confirmation.</p> <p>Or, if the resizing is finished, you can again use the Resize Instance option and choose the flavor from which you started (eo2a.large in this case). This process of scaling down is much faster than the process of scaling up.</p>"},{"location":"cloud/Resizing-a-virtual-machine-using-OpenStack-Horizon-on-3Engines-Cloud.html.html#troubleshooting","title":"Troubleshooting\ud83d\udd17","text":"<p>If any of the flavor parameters does not match, the resizing will fail.</p> <p>You will then see a balloon help message in the upper right corner:</p> <p></p> <p>In this case, the sizes of the disk before and after the resizing do not match.</p>"},{"location":"cloud/Resizing-a-virtual-machine-using-OpenStack-Horizon-on-3Engines-Cloud.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>You can also resize the virtual machine using only OpenStack CLI. 
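For reference, the Horizon resize flow described above corresponds roughly to the following OpenStack CLI commands. This is a sketch: the server name Resizing and flavor eo2.xlarge are taken from this article's example, and the exact confirm/revert syntax may differ slightly between client versions.

```shell
# Request the resize to the new flavor
openstack server resize --flavor eo2.xlarge Resizing

# Once the server reaches the VERIFY_RESIZE state, confirm the resize...
openstack server resize confirm Resizing

# ...or revert it to the previous flavor instead:
# openstack server resize revert Resizing
```

These commands require a working OpenStack CLI client with valid credentials for your project.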
More details here: Resizing a virtual machine using OpenStack CLI on 3Engines Cloud</p>"},{"location":"cloud/Spot-instances-on-3Engines-Cloud.html.html","title":"Spot instances on 3Engines Cloud\ud83d\udd17","text":"<p>A spot instance is a resource similar to Amazon EC2 Spot Instances or Google Spot VMs. In short, the user is provided with unused computational resources at a discounted price, but those resources can be terminated at short notice whenever on-demand usage increases. The main use cases are ephemeral workflows which can deal with being terminated unexpectedly and/or orchestration platforms which can deal with forced scaling down of available resources e.g. Kubernetes clusters.</p>"},{"location":"cloud/Spot-instances-on-3Engines-Cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>How to create spot instances</li> <li>Additional configuration via tags</li> <li>What is the expected behaviour</li> </ul>"},{"location":"cloud/Spot-instances-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com</p> <p>No. 2 Available exclusively on WAW3-2 cloud</p> <p>When using spot instances, be sure to work only on WAW3-2 cloud:</p> <p></p> <p>No. 3 Using quotas and flavors</p> <p>For quotas, see this article: Dashboard Overview \u2013 Project Quotas And Flavors Limits on 3Engines Cloud</p> <p>No. 4 OpenStack CLI client</p> <p>If you want to interact with 3Engines Cloud cloud using the OpenStack CLI client, you need to have it installed. 
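Once the client is installed, you can verify it from your terminal. A sketch: the first command only checks the local installation, while the second needs valid cloud credentials loaded into your environment.

```shell
# Check that the client is installed locally
openstack --version

# With credentials sourced (e.g. from an RC file), verify connectivity:
# openstack catalog list
```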
Check one of these articles:</p>"},{"location":"cloud/Status-Power-State-and-dependences-in-billing-of-instances-VMs-on-3Engines-Cloud.html.html","title":"Status Power State and dependencies in billing of instance VMs on 3Engines Cloud\ud83d\udd17","text":"<p>In OpenStack, instances have their own Status and Power State:</p> <ul> <li>Status informs about the present condition of the VM, while</li> <li>Power states tell us only whether virtual machines are running or not.</li> </ul> <p></p> <p>There are six Power states, divided into two groups, depending on whether the VM is running or not.</p>"},{"location":"cloud/Status-Power-State-and-dependences-in-billing-of-instances-VMs-on-3Engines-Cloud.html.html#power-state-while-vm-is-running","title":"Power state while VM is running\ud83d\udd17","text":"NO STATE VM is running, but encountered some fatal errors which require the instance to be launched again. RUNNING VM is running properly. PAUSED VM is frozen and a memory dump is made."},{"location":"cloud/Status-Power-State-and-dependences-in-billing-of-instances-VMs-on-3Engines-Cloud.html.html#power-state-while-vm-is-turned-off","title":"Power state while VM is turned off\ud83d\udd17","text":"SHUT DOWN VM is powered off properly. CRASHED VM went down due to a fatal error. 
SUSPENDED VM is blocked by the system (most likely because of negative credit on the account)."},{"location":"cloud/Status-Power-State-and-dependences-in-billing-of-instances-VMs-on-3Engines-Cloud.html.html#status-and-its-conditions","title":"Status and its conditions\ud83d\udd17","text":"<p>Status may have one of the following conditions:</p> ERROR <p>Instance is not working due to problems in the creation process.</p> <p>User is not charged.</p> ACTIVE <p>Instance is running with the specified image.</p> <p>User is charged for the chosen flavor and storage.</p> PAUSED <p>Instance is paused with the specified image.</p> <p>User is charged for flavor and storage.</p> SUSPENDED <p>Instance is suspended with the specified image, with a valid memory snapshot.</p> <p>User is charged for flavor and storage.</p> SHUT OFF <p>Instance is powered down by the user and the image is on disk.</p> <p>User is charged for the chosen flavor and storage.</p> SHELVED OFFLOADED <p>Instance is removed from the compute host and it can be restored with the \u201cUnshelve instance\u201d button.</p> <p>User is charged for storage.</p> RESIZED/MIGRATED <p>Instance is stopped on the source node but running on the destination node. Images exist at two locations. User confirmation is required.</p> <p>User is charged for the new flavor and storage.</p> <p>Please note that floating IP addresses are billed regardless of instance state.</p>"},{"location":"cloud/VM-created-with-option-Create-New-Volume-No-on-3Engines-Cloud.html.html","title":"VM created with option Create New Volume No on 3Engines Cloud\ud83d\udd17","text":"<p>During creation of a VM you can select a source. 
If you choose \u201cImage\u201d, you can then choose Yes or No for the option \u201cCreate New Volume\u201d.</p> <p></p> <p>By default No is selected:</p> <p></p> <p>The new Virtual Machine will be created with the System Volume (Root Disk) size as defined in the flavor.</p> <p></p> <p>If you want to select a different size for the System Volume (Root Disk) please read article VM created with option Create New Volume Yes on 3Engines Cloud.</p> <p></p> <p>In contrast to a VM created when choosing Yes, when choosing No the system disk is \u201cephemeral\u201d and will not be visible in the Volumes view.</p> <p></p>"},{"location":"cloud/VM-created-with-option-Create-New-Volume-Yes-on-3Engines-Cloud.html.html","title":"VM created with option Create New Volume Yes on 3Engines Cloud\ud83d\udd17","text":"<p>Note</p> <p>While creating a new Virtual Machine, you have to choose the source from which the VM will be built:</p> <p></p> <p>If you choose \u201cImage\u201d, you can choose the option \u201cCreate New Volume\u201d: Yes or No.</p> <p>By default the option \u201cNo\u201d is chosen.</p> <p>Option: Create New Volume - Yes</p> <p>This option allows you to choose the system volume different from that defined in the flavor:</p> <p></p> <p>Now you can choose the Volume Size.</p> <p>In the example below we will choose the volume 15 GB and apply it to the flavor eo1.xsmall.</p> <p>Default size of the system disk for flavor eo1.xsmall is 8 GB:</p> <p></p> <p>After choosing the other parameters (Details, Flavor, Networks, Security Group and Key Pair) you can launch the instance:</p> <p></p> <p>You can see the VM created:</p> <p></p> <p>If you go to Volumes -&gt; Volumes pane you can see the system volume which the new VM is built on:</p> <p></p> <p>If you click on \u201cEdit Volume\u201d button, you will see that the volume is bootable:</p> <p></p> <p>If you previously have chosen \u201cDelete Volume on Instance Delete\u201d: No</p> <p></p> <p>and now you will delete the 
VM:</p> <p></p> <p>then the volume will remain (not attached to any instance):</p> <p></p> <p>You can now create a new VM from the volume choosing \u201cLaunch as Instance\u201d:</p> <p></p> <p></p> <p></p> <p>you can choose a new flavor (e.g. eo1.xmedium) different from the original (eo1.xsmall):</p> <p></p> <p>After choosing other parameters (Details, Networks, Security Group and Key Pair) you can launch the instance.</p> <p></p> <p></p>"},{"location":"cloud/What-Image-Formats-are-available-in-OpenStack-3Engines-Cloud-Cloud.html.html","title":"What Image Formats are Available in OpenStack 3Engines Cloud cloud\ud83d\udd17","text":"<p>In 3Engines Cloud OpenStack, ten image format extensions are available:</p> <p>QCOW2 - Formatted Virtual Machine Storage is a storage format for virtual machine disk images. QCOW stands for \u201cQEMU copy on write\u201d. It is used with the KVM hypervisor. The images are typically smaller than RAW images, so it is often faster to convert a raw image to qcow2 for uploading instead of uploading the raw file directly. Because raw images do not support snapshots, OpenStack Compute will automatically convert raw image files to qcow2 as needed.</p> <p>RAW - The RAW storage is the simplest one, and is natively supported by both KVM and Xen hypervisors. A RAW image can be considered the bit-equivalent of a block device file. It has a performance advantage over QCOW2 in that no formatting is applied to virtual machine disk images stored in the RAW format. No additional work from the host is required for virtual machine data operations on disk images stored in this format.</p> <p>ISO - The ISO format is a disk image formatted with the read-only ISO 9660 filesystem which is used for CDs and DVDs. 
While ISO is not frequently considered a virtual machine image format, ISOs contain bootable filesystems with an installed operating system, so they can be treated like other virtual machine image files.</p> <p>VDI - Virtual Disk Image format used by VirtualBox for image files. None of the OpenStack Compute hypervisors supports VDI directly, so you will need to convert these files to a different format to use them.</p> <p>VHD - Virtual Hard Disk format for images, widely used by Microsoft (e.g. Hyper-V, Microsoft Virtual PC).</p> <p>VMDK - Virtual Machine Disk format is used by VMware ESXi hypervisor for images. VMware\u2019s products use various versions and variations of VMDK disk images, so it\u2019s important to understand where it can be used.</p> <p>PLOOP - A disk format supported and used by Virtuozzo to run OS Containers.</p> <p>AKI/AMI/ARI - This was the initial image format supported by Amazon EC2. The image consists of three files:</p> <ul> <li>AKI - Amazon Kernel Image is a kernel file that the hypervisor will load initially to boot the image. For a Linux machine, this would be a vmlinuz file.</li> <li>AMI - Amazon Machine Image is a virtual machine image in raw format, as described above.</li> <li>ARI - Amazon Ramdisk Image is an optional ramdisk file mounted at boot time. For a Linux machine, this would be an initrd file.</li> </ul>"},{"location":"cloud/What-is-an-OpenStack-domain-on-3Engines-Cloud.html.html","title":"What is an OpenStack domain on 3Engines Cloud\ud83d\udd17","text":"<p>Domain</p> <p>The intention of providing a domain in a cloud environment is to define boundaries for management. An OpenStack domain is a type of container for projects, users and groups. One crucial benefit is separating overlapping resource names for different domains. 
Furthermore, permissions at the project level and at the domain level are unrelated, which makes customization much easier for administrators.</p> <p>The current domain name is visible beside the project that is currently selected in the Horizon panel.</p> <p></p> <p>The name of the domain is grayed out, denoting that you can use only the domain that has been allocated to you by the system.</p> <p>You cannot create a new domain.</p> <p>Service relation</p> <p>Your 3Engines Cloud account is linked to your main account in a particular domain, which allows you to log in to the OpenStack dashboard without having to supply Keystone credentials.</p> <p>This is possible thanks to the integration between Keycloak and Keystone.</p> <p>Docs</p> <p>Click here if you want to see official OpenStack documentation for domains.</p>"},{"location":"cloud/What-is-an-OpenStack-project-on-3Engines-Cloud.html.html","title":"What is an OpenStack project on 3Engines Cloud\ud83d\udd17","text":"<p>A project is an isolated group of zero or more users who share common access with specific privileges to the software instance in OpenStack. A project is created for each set of instances and networks that are configured as a discrete entity for the project. A project owns virtual machines (in Compute) or containers (in Object Storage).</p> <p>You can imagine that the whole OpenStack cloud is a big cake of resources (vCPU, disks, instances, etc\u2026) and projects are the pieces of this cake served to the customers.</p> <p>The current project name is visible in the Horizon panel.</p> <p></p> <p>Projects are created, managed, and edited at the OpenStack Projects screen.</p> <p></p> <p>Users can be associated with more than one project, but once signed in, they can only see and access the resources available in that project. 
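For illustration, projects and role assignments can also be inspected from the OpenStack CLI. A sketch: MyProject is a placeholder project name, and the commands require appropriate permissions and sourced credentials.

```shell
# List the projects visible to your user
openstack project list

# Show which users hold which roles in a given project
openstack role assignment list --project MyProject --names
```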
Each project and user pairing can have a role associated with it.</p> <p>OpenStack users can create projects, and create new accounts using the OpenStack Dashboard. They can also associate other users with roles, projects, or both.</p> <p>To remove a project, it is mandatory to manually remove all of its resources first.</p> <p>Users can create private networks for connectivity within projects; see How to create a network with router in Horizon Dashboard on 3Engines Cloud. By default, they are fully isolated and are not shared with other projects.</p>"},{"location":"cloud/cloud.html.html","title":"Cloud Services","text":""},{"location":"cloud/cloud.html.html#available-documentation","title":"Available Documentation","text":"<ul> <li>Dashboard Overview \u2013 Project Quotas And Flavors Limits on 3Engines Cloud</li> <li>How to access the VM from OpenStack console on 3Engines Cloud</li> <li>How to clone existing and configured VMs on 3Engines Cloud</li> <li>How to fix unresponsive console issue on 3Engines Cloud</li> <li>How to generate and manage EC2 credentials on 3Engines Cloud</li> <li>How to generate or use Application Credentials via CLI on 3Engines Cloud</li> <li>How to Use GUI in Linux VM on 3Engines Cloud and access it From Local Linux Computer</li> <li>How To Create a New Linux VM With NVIDIA Virtual GPU in the OpenStack Dashboard Horizon on 3Engines Cloud</li> <li>How to install and use Docker on Ubuntu 24.04</li> <li>How to use Security Groups in Horizon on 3Engines Cloud</li> <li>How to create key pair in OpenStack Dashboard on 3Engines Cloud</li> <li>How to create new Linux VM in OpenStack Dashboard Horizon on 3Engines Cloud</li> <li>How to install Python virtualenv or virtualenvwrapper on 3Engines Cloud</li> <li>How to start a VM from a snapshot on 3Engines Cloud</li> <li>Status Power State and dependencies in billing of instance VMs on 3Engines Cloud</li> <li>How to upload your custom image using OpenStack CLI on 3Engines Cloud</li> <li>VM created with option 
Create New Volume No on 3Engines Cloud</li> <li>VM created with option Create New Volume Yes on 3Engines Cloud</li> <li>What is an OpenStack domain on 3Engines Cloud</li> <li>What is an OpenStack project on 3Engines Cloud</li> <li>How to create a Linux VM and access it from Windows desktop on 3Engines Cloud</li> <li>How to create a Linux VM and access it from Linux command line on 3Engines Cloud</li> <li>DNS as a Service on 3Engines Cloud Hosting</li> <li>What Image Formats are Available in OpenStack 3Engines Cloud cloud</li> <li>How to upload custom image to 3Engines Cloud cloud using OpenStack Horizon dashboard</li> <li>How to create Windows VM on OpenStack Horizon and access it via web console on 3Engines Cloud</li> <li>How to transfer volumes between domains and projects using Horizon dashboard on 3Engines Cloud</li> <li>Spot instances on 3Engines Cloud</li> <li>How to create instance snapshot using Horizon on 3Engines Cloud</li> <li>How to start a VM from instance snapshot using Horizon dashboard on 3Engines Cloud</li> <li>How to create a VM using the OpenStack CLI client on 3Engines Cloud cloud</li> <li>OpenStack User Roles on 3Engines Cloud</li> <li>Resizing a virtual machine using OpenStack Horizon on 3Engines Cloud</li> <li>Block storage and object storage performance limits on 3Engines Cloud</li> </ul>"},{"location":"datavolume/Bootable-versus-non-bootable-volumes-on-3Engines-Cloud.html.html","title":"Bootable versus non-bootable volumes on 3Engines Cloud\ud83d\udd17","text":"<p>Each volume has an indicator called bootable which shows whether an operating system can be booted from it or not. That indicator can be set up manually at any time. 
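With the OpenStack CLI, the bootable indicator can be inspected and changed like this. A sketch: myvolume is a placeholder volume name, and the commands need a configured CLI client with valid credentials.

```shell
# Check whether a volume is marked bootable
openstack volume show -c bootable myvolume

# Set or clear the bootable flag manually
openstack volume set --bootable myvolume
openstack volume set --non-bootable myvolume
```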
If you set it up on a volume that does not contain a bootable operating system and later try to boot a VM from it, you will see an error as a response.</p> <p>In this article we will</p> <ul> <li>explain practical differences between bootable and non-bootable volumes and</li> <li>provide procedures in Horizon and OpenStack CLI to check whether the volume is bootable or not.</li> </ul>"},{"location":"datavolume/Bootable-versus-non-bootable-volumes-on-3Engines-Cloud.html.html#bootable-vs-non-bootable-volumes","title":"Bootable vs. non-bootable volumes\ud83d\udd17","text":"<p>Bootable and non-bootable volumes share the following similarities:</p> <ul> <li>Data storage: both types can store data (regardless of being bootable or not)</li> <li>Persistence: they can be retained even if an instance is removed</li> <li>Snapshots: they allow you to create snapshots which represent the state of a volume at a particular point in time.</li> </ul> <p>From a snapshot, you can spawn additional volumes, so volumes act as a means of both conserving and transferring data.</p> <p>Bootable volumes usually serve as a boot drive for a virtual machine while non-bootable volumes typically function as data storage only. 
Bootable volumes can also contain data, but part of their capacity will be devoted to the operating system they contain.</p> <p>On the other hand, non-bootable volumes can</p> <ul> <li>add more storage space to an instance (especially for applications which require lots of data) and</li> <li>separate data from the operating system to make backups and data management easier.</li> </ul>"},{"location":"datavolume/Bootable-versus-non-bootable-volumes-on-3Engines-Cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Which volumes appear when creating a virtual machine using Horizon dashboard?</li> <li>Attempting to create a virtual machine from non-bootable volume using OpenStack CLI</li> <li>Checking whether a volume is bootable</li> <li>Checking whether a volume snapshot was created from a bootable volume</li> <li>Modifying bootable status of a volume</li> <li>What happens if you launch a virtual machine from a volume which does not have a functional operating system?</li> </ul>"},{"location":"datavolume/Bootable-versus-non-bootable-volumes-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p> <p>No. 2 OpenStack CLI client operational</p> <p>We assume you are familiar with the OpenStack CLI client. If not, here are some articles to get you started:</p>"},{"location":"datavolume/Ephemeral-vs-Persistent-storage-option-Create-New-Volume-on-3Engines-Cloud.html.html","title":"Ephemeral vs Persistent storage option Create New Volume on 3Engines Cloud\ud83d\udd17","text":"<p>Volumes created in the Volumes &gt; Volumes section are persistent storage. They can be attached to a virtual machine and then reattached to a different one. They survive the removal of the virtual machine to which they are connected. 
You can also clone them, which is a simple way of creating a backup. However, if you copy them, you might also be interested in Volume snapshot inheritance and its consequences on 3Engines Cloud.</p> <p>If you follow the instructions in this article: VM created with option Create New Volume Yes on 3Engines Cloud and set Delete Volume on Instance Delete to No, the boot drive of such virtual machine will also be persistent storage. You can, for example, use this feature to perform various tests and experiments.</p> <p>If you do not need persistent storage, use ephemeral storage. It cannot be reattached to a different machine and will be removed if the machine is removed. See the article VM created with option Create New Volume No on 3Engines Cloud on how to create a virtual machine with this type of storage.</p> <p>You may find more information regarding this topic in the official OpenStack documentation on design storage concepts.</p>"},{"location":"datavolume/How-To-Attach-Volume-To-Windows-VM-On-3Engines-Cloud.html.html","title":"How To Attach Volume To Windows VM On 3Engines Cloud\ud83d\udd17","text":"<p>In this tutorial, you will attach a volume to your Windows virtual machine. It increases the storage available for your files.</p>"},{"location":"datavolume/How-To-Attach-Volume-To-Windows-VM-On-3Engines-Cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Creating a new volume</li> <li>Attaching the new volume to a VM</li> <li>Preparing the volume to use with a VM</li> </ul>"},{"location":"datavolume/How-To-Attach-Volume-To-Windows-VM-On-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Hosting</p> <p>You need a 3Engines Cloud hosting account with Horizon interface https://horizon.3Engines.com.</p> <p>No. 2 Windows VM</p> <p>You must operate a Microsoft Windows virtual machine running on 3Engines Cloud cloud. 
You can access it using the webconsole (How to access the VM from OpenStack console on 3Engines Cloud) or through RDP. If you are using RDP, we strongly recommend using a bastion host for your security: Connecting to a Windows VM via RDP through a Linux bastion host port forwarding on 3Engines Cloud.</p>"},{"location":"datavolume/How-To-Attach-Volume-To-Windows-VM-On-3Engines-Cloud.html.html#step-1-create-a-new-volume","title":"Step 1: Create a New Volume\ud83d\udd17","text":"<p>Login to the Horizon panel available at https://horizon.3Engines.com.</p> <p>Go to the section Volumes -&gt; Volumes:</p> <p></p> <p>Click Create Volume.</p> <p>The following window should appear:</p> <p></p> <p>In it provide the Volume Name of your choice.</p> <p>Choose the Type of your volume - SSD or HDD.</p> <p>Enter the size of your volume in gigabytes.</p> <p>When you\u2019re done, click Create Volume.</p> <p>You should now see the volume you just created. In our case it is called data:</p> <p></p>"},{"location":"datavolume/How-To-Attach-Volume-To-Windows-VM-On-3Engines-Cloud.html.html#step-2-attach-the-volume-to-vm","title":"Step 2: Attach the Volume to VM\ud83d\udd17","text":"<p>Now that you have created your volume, you can use it as storage for one of your VMs. To do that, attach the volume to a VM.</p> <p>Shut down your VM if it is running.</p> <p>In the Actions menu for that volume select the option Manage Attachments:</p> <p></p> <p>You should now see the following window:</p> <p></p> <p>Select the virtual machine to which the volume should be attached from the drop-down menu Attach to Instance and click Attach Volume.</p> <p>Your volume should now be attached to the virtual machine:</p> <p></p>"},{"location":"datavolume/How-To-Attach-Volume-To-Windows-VM-On-3Engines-Cloud.html.html#step-3-format-the-drive","title":"Step 3: Format the Drive\ud83d\udd17","text":"<p>Start your VM and access it using RDP or the webconsole (see Prerequisite 2). 
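As an aside before formatting, Steps 1 and 2 above can also be performed with the OpenStack CLI instead of Horizon. A sketch: the volume name data matches this article's example, while myWindowsVM is a placeholder server name; the VM should be shut down before attaching.

```shell
# Step 1: create a 2 GB volume named "data"
openstack volume create --size 2 data

# Step 2: attach it to the VM
openstack server add volume myWindowsVM data
```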
Right-click the Start button and from the context menu select Disk Management. You should receive the following window:</p> <p></p> <p>In its lower section are the drives currently connected to your virtual machine:</p> <p></p> <p>In this case (on the screenshot above), there are two drives:</p> <ul> <li>the system drive with 32 GB space</li> <li>the attached volume with 2 GB of unallocated space</li> </ul> <p>Right-click the section of the window labeled Not Initialized:</p> <p></p> <p>From the context menu select Initialize Disk. You should receive the following window:</p> <p></p> <p>In this window you are asked which partition style you want to use: MBR or GPT. If your volume has 2 TB of space or less and you intend to use 4 primary partitions or fewer, you can use MBR, but if your requirements are higher, you should use GPT.</p> <p>Choose either of these options and click OK.</p> <p>Right-click the Unallocated space:</p> <p></p> <p>Choose New Simple Volume.</p> <p>You should receive the following window:</p> <p></p> <p>Click Next &gt;. The following window should appear:</p> <p></p> <p>If you want your volume to have only one partition, leave the default value in the text field. Otherwise, enter the size of the first partition of your volume.</p> <p>You can choose to either assign a drive letter to your drive or mount it in an empty folder.</p> <ul> <li>If you want to assign a drive letter to that volume, choose the Assign the following drive letter: radio button. From the drop-down menu to its right choose a letter to which you wish to attach your volume. Confirm your choice by clicking OK.</li> <li>If you want to mount the volume to an NTFS folder on your drive, choose Mount in the following empty NTFS folder:. Click Browse\u2026 and in the Browse for Drive Path window choose an empty folder in which you wish to mount it. 
Confirm your choice by clicking OK.</li> </ul> <p>The following window should now appear:</p> <p></p> <p>Here you can choose the formatting settings. Keep the radio button Format this drive with the following settings: selected. You can now enter the name which Windows will show for your new volume - it can be different from the one you typed in Step 1. Keep the Perform a quick format checkbox selected. Click Next &gt;. You should get the following window containing the summary of your chosen settings:</p> <p></p> <p>Click Finish.</p> <p>Once the formatting process is complete, you should see appropriate information about your volume in the Disk Management window:</p> <p></p> <p>Your volume should now be mounted. If you chose to assign a drive letter, it should be visible in the This PC window:</p> <p></p> <p>If you want to create more partitions, repeat right-clicking the Unallocated space and completing the wizard as previously explained.</p>"},{"location":"datavolume/How-To-Attach-Volume-To-Windows-VM-On-3Engines-Cloud.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>Once you have gathered some data on your volume, you can create its backup, as explained in this article:</p> <p>How to Create Backup of Your Volume From Windows Machine on 3Engines Cloud</p>"},{"location":"datavolume/How-To-Create-Backup-Of-Your-Volume-From-Windows-Machine-on-3Engines-Cloud.html.html","title":"How to Create Backup of Your Volume From Windows Machine on 3Engines Cloud\ud83d\udd17","text":"<p>In this tutorial you will learn how to create a backup of your volume on 3Engines Cloud cloud. It allows you to save its state at a certain point in time and, for example, perform some experiments on it. You can then restore the volume to its previous state if you are unhappy with the results.</p> <p>Those backups are stored using object storage. 
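The same create/restore cycle is also available from the OpenStack CLI. A sketch: data and data-backup are placeholder names, the volume should be detached first, and the commands need a configured CLI client with valid credentials.

```shell
# Create a backup of the detached volume
openstack volume backup create --name data-backup data

# Check the backup status
openstack volume backup list

# Restore the backup into an existing volume (overwriting its contents)
openstack volume backup restore data-backup data
```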
Restoring a backup will delete all data added to a volume after backup was created.</p>"},{"location":"datavolume/How-To-Create-Backup-Of-Your-Volume-From-Windows-Machine-on-3Engines-Cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Disconnecting the volume from a Windows virtual machine</li> <li>Creating a backup of a volume</li> <li>Restoring a backup of a volume</li> <li>Reattaching a volume to your Windows virtual machine</li> </ul>"},{"location":"datavolume/How-To-Create-Backup-Of-Your-Volume-From-Windows-Machine-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p> <p>No. 2 Windows VM</p> <p>You must operate a Microsoft Windows virtual machine running on 3Engines Cloud cloud. You can access it using the webconsole (How to access the VM from OpenStack console on 3Engines Cloud) or through RDP. If you are using RDP, we strongly recommend using a bastion host for your security: Connecting to a Windows VM via RDP through a Linux bastion host port forwarding on 3Engines Cloud.</p> <p>No. 3 Volume</p> <p>A volume must be connected to your Windows virtual machine.</p>"},{"location":"datavolume/How-To-Create-Backup-Of-Your-Volume-From-Windows-Machine-on-3Engines-Cloud.html.html#disconnecting-the-volume-from-a-virtual-machine","title":"Disconnecting the volume from a virtual machine\ud83d\udd17","text":"<p>Before creating a backup of your volume, disconnect it.</p> <p>On your virtual machine, right-click the Start menu and select Disk Management.</p> <p></p> <p>A window similar to this should appear:</p> <p></p> <p>Right-click your attached volume and select Change Drive Letter and Paths\u2026. 
The following window should appear:</p> <p></p> <p>Select the drive letter or mount point of your drive and click Remove.</p> <p>Note</p> <p>If you receive the following warning:</p> <p></p> <p>make sure that the removal does not break your workflow and click Yes.</p> <p>Shut down the virtual machine and return to the Horizon dashboard: https://horizon.3Engines.com</p> <p>Go to Volumes &gt; Volumes. You should see your volume there:</p> <p></p> <p>Select Manage Attachments from the drop-down menu in the Actions column for your volume:</p> <p></p> <p>The following window should appear:</p> <p></p> <p>Click Detach Volume and confirm your choice.</p>"},{"location":"datavolume/How-To-Create-Backup-Of-Your-Volume-From-Windows-Machine-on-3Engines-Cloud.html.html#creating-a-backup-of-your-volume","title":"Creating a Backup of Your Volume\ud83d\udd17","text":"<p>Now that you have detached the volume from your virtual machine, you can make its backup by following these steps:</p> <ul> <li>Go to Volumes &gt; Volumes.</li> <li>Choose Create Backup from the drop-down menu in the Actions column for your volume. You should get the following window:</li> </ul> <p></p> <ul> <li>Enter the chosen name for your backup in the Backup Name text field. Optionally, you can provide its description in the Description text field. Once you\u2019re ready, click Create Volume Backup.</li> </ul> <p>Clicking Create Volume Backup initializes the process of backup creation and moves you to the Volumes &gt; Backups section of the Horizon dashboard. 
There you should see that your backup is being created.</p> <p>The time it takes to create the backup will vary.</p> <p>Once the process is over, you should see the status Available next to your backup:</p> <p></p>"},{"location":"datavolume/How-To-Create-Backup-Of-Your-Volume-From-Windows-Machine-on-3Engines-Cloud.html.html#restoring-the-backup","title":"Restoring the backup\ud83d\udd17","text":"<p>There are two ways of restoring a backup:</p> <ul> <li>You can overwrite an existing volume with the content of the backup. This will delete all data that currently resides on that volume.</li> <li>You can create a new volume based on the content of the backup.</li> </ul> <p>Go to Volumes &gt; Backups section of the Horizon dashboard. In the Actions column of the appropriate backup, choose Restore Backup. The following window should appear:</p> <p></p> <p>Use Select Volume drop-down list to select the volume which your backup will replace. You can also create a new volume from that backup by choosing Create a New Volume.</p> <p>In either case, click Restore Backup to Volume. This will initialize the process of restoring backup and move you to the Volumes &gt; Volumes section of the Horizon dashboard.</p> <p>Once this operation is completed, you should see the status Available next to your volume:</p> <p></p> <p>You can now reattach the volume to your virtual machine.</p>"},{"location":"datavolume/How-To-Create-Backup-Of-Your-Volume-From-Windows-Machine-on-3Engines-Cloud.html.html#reattaching-the-volume-to-your-virtual-machine","title":"Reattaching the volume to your virtual machine\ud83d\udd17","text":"<p>In the Volumes &gt; Volumes section of the Horizon dashboard, find the row containing your volume. Choose Manage Attachments from the drop-down menu in the Actions column for it. You should get the following window:</p> <p></p> <p>From the drop-down menu Attach To Instance choose the name of the virtual machine to which the volume was previously attached. 
Click Attach Volume.</p> <p>Attaching should take up to a few seconds. Once it is completed, you should see appropriate information in the Attached To column for your volume:</p> <p></p> <p>Go to Compute &gt; Instances. Choose Start Instance in the row with the virtual machine to which the volume has just been attached. Login to your Windows VM using RDP or the webconsole (see Prerequisite No. 2).</p> <p>On your virtual machine, right-click the Start menu and select Disk Management. You should receive the following window:</p> <p></p> <p>Note</p> <p>If at the bottom of the screen you see status Offline next to your attached volume, right-click it and choose Online from the context menu.</p> <p></p> <p>Right-click your attached volume and select Change Drive Letter and Paths\u2026. The window titled Change Drive Letter and Paths for New Volume should appear:</p> <p></p> <p>Click Add\u2026. The following window should appear:</p> <p></p> <p>You can choose to either assign a drive letter to your drive or mount it in an empty folder.</p> <ul> <li>If you want to assign a drive letter to that volume, choose the Assign the following drive letter: radio button. From the drop-down menu to its right choose a letter to which you wish to attach your volume. Confirm your choice by clicking OK.</li> <li>If you want to mount the volume to an NTFS folder on your drive, choose Mount in the following empty NTFS folder:. Click Browse\u2026 and in the Browse for Drive Path window choose an empty folder in which you wish to mount it. Confirm your choice by clicking OK.</li> </ul> <p>Your volume should now be mounted. 
If you chose to assign a drive letter, it should be visible in the This PC window:</p> <p></p>"},{"location":"datavolume/How-many-objects-can-I-put-into-Object-Storage-container-bucket-on-3Engines-Cloud.html.html","title":"How many objects can I put into Object Storage container bucket on 3Engines Cloud\ud83d\udd17","text":"<p>It is highly advisable to put no more than 1 million (1 000 000) objects into one bucket (container). Storing more objects than that makes listing them very inefficient. We suggest creating many buckets with a small number of objects each, instead of a few buckets with many objects.</p>"},{"location":"datavolume/How-to-attach-a-volume-to-VM-less-than-2TB-on-Linux-on-3Engines-Cloud.html.html","title":"How to attach a volume to VM less than 2TB on Linux on 3Engines Cloud\ud83d\udd17","text":"<p>In this tutorial, you will create a volume which is smaller than 2 TB. Then, you will attach it to a VM and format it in the appropriate way.</p> <p>Note</p> <p>If you want to create and attach a volume that has more than 2 TB of storage, you will need to use different software for its formatting. If this is the case, please visit the following article instead: How to attach a volume to VM more than 2TB on Linux on 3Engines Cloud.</p>"},{"location":"datavolume/How-to-attach-a-volume-to-VM-less-than-2TB-on-Linux-on-3Engines-Cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Creating a new volume</li> <li>Attaching the new volume to a VM</li> <li>Formatting and mounting of the new volume</li> </ul>"},{"location":"datavolume/How-to-attach-a-volume-to-VM-less-than-2TB-on-Linux-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Hosting</p> <p>You need a 3Engines Cloud hosting account with the Horizon interface https://horizon.3Engines.com.</p> <p>No. 
2 Linux VM running on the 3Engines Cloud</p> <p>Instructions for creating and accessing a Linux VM using default images can be found here:</p> <p>How to create a Linux VM and access it from Linux command line on 3Engines Cloud</p> <p>or here:</p> <p>How to create a Linux VM and access it from Windows desktop on 3Engines Cloud.</p> <p>The instructions included in this article are designed for Ubuntu 22.04 LTS.</p> <p>No. 3 Basic knowledge of the Linux terminal</p> <p>You will need basic knowledge of the Linux command line.</p> <p>No. 4 SSH access to the VM</p> <p>How to connect to your virtual machine via SSH in Linux on 3Engines Cloud.</p>"},{"location":"datavolume/How-to-attach-a-volume-to-VM-less-than-2TB-on-Linux-on-3Engines-Cloud.html.html#step-1-create-a-volume","title":"Step 1: Create a Volume\ud83d\udd17","text":"<p>Log in to the Horizon panel available at https://horizon.3Engines.com.</p> <p>Go to the section Volumes -&gt; Volumes:</p> <p></p> <p>Click Create Volume.</p> <p>The following window should appear:</p> <p></p> <p>In it, provide the Volume Name of your choice.</p> <p>Choose the Type of your volume - SSD or HDD.</p> <p>Enter the size of your volume in gigabytes.</p> <p>When you\u2019re done, click Create Volume.</p> <p>You should now see the volume you just created. In our case it is called volume-small:</p> <p></p>"},{"location":"datavolume/How-to-attach-a-volume-to-VM-less-than-2TB-on-Linux-on-3Engines-Cloud.html.html#step-2-attach-the-volume-to-vm","title":"Step 2: Attach the Volume to VM\ud83d\udd17","text":"<p>Now that you have created your volume, you can use it as storage for one of your VMs. 
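</p> <p>As an aside, Steps 1 and 2 can also be performed with the OpenStack CLI client instead of Horizon. This is a sketch; the size and volume name are examples and the server name is a placeholder - adjust them to your setup:</p> <pre><code># create a 100 GB volume, then attach it to a VM\nopenstack volume create --size 100 volume-small\nopenstack server add volume &lt;server name&gt; volume-small\n</code></pre> <p>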
To do that, attach the volume to a VM.</p> <p>In the Actions menu for that volume select the option Manage Attachments:</p> <p></p> <p>You should now see the following window:</p> <p></p> <p>Select the virtual machine to which the volume should be attached:</p> <p></p> <p>Click Attach Volume.</p> <p>Your volume should now be attached to the VM:</p> <p></p>"},{"location":"datavolume/How-to-attach-a-volume-to-VM-less-than-2TB-on-Linux-on-3Engines-Cloud.html.html#step-3-partition-the-volume","title":"Step 3: Partition the Volume\ud83d\udd17","text":"<p>It is time to access your virtual machine to prepare the volume for data storage.</p> <p>Connect to your virtual machine using SSH or the web console.</p> <p>Execute the following command to make sure that the volume has been attached:</p> <pre><code>lsblk\n</code></pre> <p>You should see the output similar to this:</p> <p></p> <p>In this example, the attached volume that was previously called volume-small is represented by the device file sdb. Its size is 100 GB. Memorize the name of the device file representing the drive you attached or write it somewhere down - it will be needed later during starting fdisk.</p> <p>In order to be able to use the volume as storage, you will need to use fdisk to create a partition table.</p> <p>Start fdisk (replace sdb with the name of the device file provided to you previously by the lsblk command):</p> <pre><code>sudo fdisk /dev/sdb\n</code></pre> <p>You should now see the following prompt:</p> <pre><code>Command (m for help):\n</code></pre> <p>Answer with n and press Enter. 
A series of prompts similar to the ones below will appear on screen - keep pressing Enter on your keyboard to accept the default values.</p> <pre><code>Partition type\np primary (0 primary, 0 extended, 4 free)\ne extended (container for logical partitions)\nSelect (default p):\n\nUsing default response p.\nPartition number (1-4, default 1):\nFirst sector (2048-209715199, default 2048):\nLast sector, +/-sectors or +/-size{K,M,G,T,P} (2048-209715199, default 209715199):\n</code></pre> <p>You should now see the confirmation similar to this:</p> <pre><code>Created a new partition 1 of type 'Linux' and of size 100 GiB.\n</code></pre> <p>After it you will see the following prompt again:</p> <pre><code>Command (m for help):\n</code></pre> <p>This time, answer with w. You will see the following message:</p> <pre><code>The partition table has been altered.\nCalling ioctl() to re-read partition table.\nSyncing disks.\n</code></pre> <p>Execute the following command again to confirm that the partition was created successfully:</p> <pre><code>lsblk\n</code></pre> <p>The device file of the new partition should have the same name as the device file of the drive followed by the 1 digit. In this case, it will be sdb1. Memorize or write it somewhere down - it will be needed later during creation of the file system.</p> <p></p>"},{"location":"datavolume/How-to-attach-a-volume-to-VM-less-than-2TB-on-Linux-on-3Engines-Cloud.html.html#step-5-create-the-file-system","title":"Step 5: Create the File System\ud83d\udd17","text":"<p>In order to save data on this volume, create ext4 filesystem on it. 
ext4 is arguably the most popular filesystem on Linux distributions.</p> <p>It can be created by executing the following command:</p> <pre><code>sudo mkfs.ext4 /dev/sdb1\n</code></pre> <p>Replace sdb1 with the name of the device file of the partition provided to you previously by the lsblk command.</p> <p>This process should take less than a minute.</p>"},{"location":"datavolume/How-to-attach-a-volume-to-VM-less-than-2TB-on-Linux-on-3Engines-Cloud.html.html#step-6-create-the-mount-point","title":"Step 6: Create the mount point\ud83d\udd17","text":"<p>You need to specify the location in the directory structure from which you will access the data stored on that volume. In Linux it is typically done in the /etc/fstab config file.</p> <p>Below are the instructions for the nano text editor. If you prefer to use different software, please modify them accordingly.</p> <p>Install nano if you haven\u2019t already:</p> <pre><code>sudo apt install nano\n</code></pre> <p>Open the /etc/fstab file using nano:</p> <pre><code>sudo nano /etc/fstab\n</code></pre> <p>Add the line below to the end of that file. Remember to replace sdb1 with the name of the device file of your partition (it was provided to you previously by the lsblk command) and /my_volume with the directory in which you want to mount it - the mounting point.</p> <pre><code>/dev/sdb1 /my_volume ext4 defaults 0 1\n</code></pre> <p></p> <p>To save that file in nano, use the following combination of keys: CTRL+X, Y, Enter.</p> <p>Warning</p> <p>Unless you know what you\u2019re doing, you should not modify the lines which you already found in the /etc/fstab file. This file contains important information. Some or all of it might be required for the startup of the operating system.</p> <p>Next, create a new directory or use an existing one to serve as your mount point. 
If you need to create it anew, the command would be:</p> <pre><code>sudo mkdir /my_volume\n</code></pre> <p>Mount the volume to your system (replace /my_volume with your mount point):</p> <pre><code>sudo mount /my_volume\n</code></pre> <p>To check whether it was successfully mounted, execute:</p> <pre><code>df -h\n</code></pre> <p>The output should look like this:</p> <p></p> <p>The volume is owned by root, so eouser does not have access without sudo. To make it accessible to eouser, execute this command:</p> <pre><code>sudo chown eouser:eouser /my_volume\n</code></pre> <p>If you want everybody to have access to that directory (and you don\u2019t care about security at all), use the following command:</p> <pre><code>sudo chmod 777 /my_volume\n</code></pre> <p>During the next boot of your virtual machine, the volume should be mounted automatically.</p>"},{"location":"datavolume/How-to-attach-a-volume-to-VM-less-than-2TB-on-Linux-on-3Engines-Cloud.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>You have successfully created a volume and prepared it for use on a Linux virtual machine.</p> <p>You can now copy files to your new volume. If you want to move the data, attach the volume to a different machine.</p>"},{"location":"datavolume/How-to-attach-a-volume-to-VM-more-than-2TB-on-Linux-on-3Engines-Cloud.html.html","title":"How to attach a volume to VM more than 2TB on Linux on 3Engines Cloud\ud83d\udd17","text":"<p>In this tutorial, you will create a volume which is larger than 2 TB. Then, you will attach it to a VM and format it in the appropriate way.</p> <p>Note</p> <p>If you want to create and attach a volume that has less than 2 TB of storage, you will need to use different software for its formatting. 
If this is the case, please visit the following article instead: How to attach a volume to VM less than 2TB on Linux on 3Engines Cloud.</p>"},{"location":"datavolume/How-to-attach-a-volume-to-VM-more-than-2TB-on-Linux-on-3Engines-Cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Creating a new volume</li> <li>Attaching the new volume to a VM</li> <li>Formatting and mounting of the new volume</li> </ul>"},{"location":"datavolume/How-to-attach-a-volume-to-VM-more-than-2TB-on-Linux-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Hosting</p> <p>You need a 3Engines Cloud hosting account with the Horizon interface https://horizon.3Engines.com.</p> <p>No. 2 Linux VM running on the 3Engines Cloud</p> <p>Instructions for creating and accessing a Linux VM using default images can be found here:</p> <p>How to create a Linux VM and access it from Linux command line on 3Engines Cloud or here:</p> <p>How to create a Linux VM and access it from Windows desktop on 3Engines Cloud.</p> <p>The instructions included in this article are designed for Ubuntu 20.04 LTS.</p> <p>No. 3 Basic knowledge of the Linux terminal</p> <p>You will need basic knowledge of the Linux command line.</p> <p>No. 
4 SSH access to the VM</p> <p>How to connect to your virtual machine via SSH in Linux on 3Engines Cloud.</p>"},{"location":"datavolume/How-to-attach-a-volume-to-VM-more-than-2TB-on-Linux-on-3Engines-Cloud.html.html#step-1-create-a-volume","title":"Step 1: Create a Volume\ud83d\udd17","text":"<p>Login to the Horizon panel available at https://horizon.3Engines.com.</p> <p>Go to the section Volumes -&gt; Volumes:</p> <p></p> <p>Click Create Volume.</p> <p>The following window should appear:</p> <p></p> <p>In it provide the Volume Name of your choice.</p> <p>Choose the Type of your volume - SSD or HDD.</p> <p>Enter the size of your volume in gigabytes.</p> <p>When you\u2019re done, click Create Volume.</p> <p>You should now see the volume you just created. In our case it is called my-files:</p> <p></p>"},{"location":"datavolume/How-to-attach-a-volume-to-VM-more-than-2TB-on-Linux-on-3Engines-Cloud.html.html#step-2-attach-the-volume-to-vm","title":"Step 2: Attach the Volume to VM\ud83d\udd17","text":"<p>Now that you have created your volume, you can use it as storage for one of your VMs. 
To do that, attach the volume to a VM.</p> <p>In the Actions menu for that volume select the option Manage Attachments:</p> <p></p> <p>You should now see the following window:</p> <p></p> <p>Select the virtual machine to which the volume should be attached:</p> <p></p> <p>Click Attach Volume.</p> <p>Your volume should now be attached to the VM:</p> <p></p>"},{"location":"datavolume/How-to-attach-a-volume-to-VM-more-than-2TB-on-Linux-on-3Engines-Cloud.html.html#step-3-create-the-partition-table","title":"Step 3: Create the Partition Table\ud83d\udd17","text":"<p>It is time to access your virtual machine to prepare the volume for data storage.</p> <p>Connect to your virtual machine using SSH or the web console.</p> <p>Execute the following command to make sure that the volume has been attached:</p> <pre><code>lsblk\n</code></pre> <p>You should see the output similar to this:</p> <p></p> <p>In this example, the attached volume that was previously called my-files is represented by the device file sdb. Its size is 2.4 TB. Memorize the name of the device file or write it somewhere down; it will be needed in the next step, which involves starting gdisk.</p> <p>In order to be able to use the volume as storage, you will need to use gdisk to create a partition table. If you do not have this program, you can install it using the following command:</p> <pre><code>sudo apt update &amp;&amp; sudo apt upgrade &amp;&amp; sudo apt install gdisk\n</code></pre> <p>Start gdisk (replace sdb with the name of the device file provided to you previously by the lsblk command):</p> <pre><code>sudo gdisk /dev/sdb\n</code></pre> <p>You should see the output similar to this:</p> <pre><code>GPT fdisk (gdisk) version 1.0.8\n\nPartition table scan:\n MBR: not present\n BSD: not present\n APM: not present\n GPT: not present\n\nCreating new GPT entries in memory.\n\nCommand (? for help):\n</code></pre> <p>Answer with n and press Enter. 
A series of prompts similar to the ones below will appear on screen - keep pressing Enter on your keyboard to accept the default values.</p> <pre><code>Command (? for help): n\nPartition number (1-128, default 1):\nFirst sector (34-5033164766, default = 2048) or {+-}size{KMGTP}:\nLast sector (2048-5033164766, default = 5033164766) or {+-}size{KMGTP}:\nCurrent type is 8300 (Linux filesystem)\nHex code or GUID (L to show codes, Enter = 8300):\nChanged type of partition to 'Linux filesystem'\n</code></pre> <p>You will see the prompt Command (? for help): again. Answer it with w. You will now see the following question:</p> <pre><code>Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING\nPARTITIONS!!\n\nDo you want to proceed? (Y/N):\n</code></pre> <p>Answer with Y to confirm. You should get the following confirmation:</p> <pre><code>OK; writing new GUID partition table (GPT) to /dev/sdb.\n</code></pre> <p>In the end, you should receive this message:</p> <pre><code>The operation has completed successfully.\n</code></pre> <p>Execute the following command again to confirm that the partition was created successfully:</p> <pre><code>lsblk\n</code></pre> <p>The device file of the new partition should have the same name as the device file of the drive followed by the 1 digit. In this case, it will be sdb1. Memorize or write it somewhere down - it will be needed later during creation of the file system.</p> <p></p>"},{"location":"datavolume/How-to-attach-a-volume-to-VM-more-than-2TB-on-Linux-on-3Engines-Cloud.html.html#step-5-create-the-file-system","title":"Step 5: Create the File System\ud83d\udd17","text":"<p>In order to save data on this volume, create ext4 filesystem on it. 
ext4 is arguably the most popular filesystem on Linux distributions.</p> <p>It can be created by executing the following command:</p> <pre><code>sudo mkfs.ext4 /dev/sdb1\n</code></pre> <p>Replace sdb1 with the name of the device file of the partition provided to you previously by the lsblk command.</p> <p>This process took less than a minute for a 2.4 terabyte volume.</p>"},{"location":"datavolume/How-to-attach-a-volume-to-VM-more-than-2TB-on-Linux-on-3Engines-Cloud.html.html#step-6-create-the-mount-point","title":"Step 6: Create the mount point\ud83d\udd17","text":"<p>You need to specify the location in the directory structure from which you will access the data stored on that volume. In Linux it is typically done in the /etc/fstab config file.</p> <p>Below are the instructions for the nano text editor. If you prefer to use different software, please modify them accordingly.</p> <p>Before using nano, create the directory in which you wish to mount your volume - your mount point - if it doesn\u2019t exist yet. In this example, we will use the /my_volume directory which can be created using the following command:</p> <pre><code>sudo mkdir /my_volume\n</code></pre> <p>Install nano if you haven\u2019t already:</p> <pre><code>sudo apt install nano\n</code></pre> <p>Open the /etc/fstab file using nano:</p> <pre><code>sudo nano /etc/fstab\n</code></pre> <p>Add the line below to the end of that file. Remember to replace sdb1 with the name of the device file of your partition (it was provided to you previously by the lsblk command) and /my_volume with the directory in which you want to mount it - the mounting point.</p> <pre><code>/dev/sdb1 /my_volume ext4 defaults 0 1\n</code></pre> <p></p> <p>To save that file in nano, use the following combination of keys: CTRL+X, Y, Enter.</p> <p>Warning</p> <p>Unless you know what you\u2019re doing, you should not modify the lines which you already found in the /etc/fstab file. This file contains important information. 
Some or all of it might be required for the startup of the operating system.</p> <p>Mount the volume to your system (replace /my_volume with the mount point you previously created):</p> <pre><code>sudo mount /my_volume\n</code></pre> <p>To check whether it was successfully mounted, execute:</p> <pre><code>df -h\n</code></pre> <p>The output should contain the line with the device file representing your volume and its mount point. It can look like this:</p> <p></p> <p>The volume is owned by root, so eouser does not have access without sudo. To make it accessible to eouser, execute this command:</p> <pre><code>sudo chown eouser:eouser /my_volume\n</code></pre> <p>If you want everybody to have access to that directory (and you don\u2019t care about security at all), use the following command:</p> <pre><code>sudo chmod 777 /my_volume\n</code></pre> <p>During the next boot of your virtual machine, the volume should be mounted automatically.</p>"},{"location":"datavolume/How-to-attach-a-volume-to-VM-more-than-2TB-on-Linux-on-3Engines-Cloud.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>You have successfully created a volume larger than 2 TB and prepared it for use on a Linux virtual machine.</p> <p>You can now copy files to your new volume. If you want to move the data, attach the volume to a different machine.</p>"},{"location":"datavolume/How-to-create-or-delete-volume-snapshot-on-3Engines-Cloud.html.html","title":"How to create or delete volume snapshot on 3Engines Cloud\ud83d\udd17","text":"<p>Volume snapshot allows you to save the state of volume at a specific point in time. Here is how to create or delete volume snapshot using Horizon dashboard or OpenStack CLI client.</p>"},{"location":"datavolume/How-to-create-or-delete-volume-snapshot-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 
1 Hosting</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com</p> <p>No. 2 A volume</p> <p>You need to have the volume which will serve as the source of your volume snapshot.</p> <p>To prevent data corruption while creating a snapshot, the volume should not be connected to a virtual machine. If it is, disconnect it from the virtual machine using one of these articles:</p>"},{"location":"datavolume/How-to-create-volume-Snapshot-and-attach-as-Volume-on-Linux-or-Windows-on-3Engines-Cloud.html.html","title":"How to create volume Snapshot and attach as Volume on Linux or Windows on 3Engines Cloud\ud83d\udd17","text":"<p>To create a snapshot of a Volume:</p> <ul> <li>Click the Volumes tab in Horizon and choose \u201cCreate Snapshot\u201d from the drop-down menu.</li> <li>Unmap the disk in the Windows VM, then in Horizon click \u201cManage Attachments\u201d -&gt; \u201cDetach Volume\u201d.</li> </ul> <p>It is possible to create a snapshot of an attached volume, but if any data is written to it while the snapshot is being created, the volume may end up corrupted.</p> <ul> <li>Convert the snapshot into a Volume - \u201cVolume Snapshots\u201d -&gt; \u201cCreate Volume\u201d.</li> <li>Map the new Volume in your Windows VM.</li> </ul> <p>On Linux systems you may mount that newly created Volume and access the data; the filesystem is the same as on the original Volume.</p> <p>For example, if the Volume has one partition and is attached as /dev/vdc, do</p> <pre><code>sudo mkdir /my_snapshot1 &amp;&amp; sudo mount /dev/vdc1 /my_snapshot1\n</code></pre>"},{"location":"datavolume/How-to-export-a-volume-over-NFS-on-3Engines-Cloud.html.html","title":"How to export a volume over NFS on 3Engines Cloud\ud83d\udd17","text":"<p>Server configuration</p> <p>Update your system:</p> <pre><code>sudo apt update &amp;&amp; sudo apt upgrade\n</code></pre> <p>Install the necessary packages:</p> <pre><code>sudo apt install nfs-kernel-server\n</code></pre> <p>Create a new folder, which will 
be exported by NFS, e.g.:</p> <pre><code>sudo mkdir -p /mnt/&lt;name of your folder&gt;\n</code></pre> <p>Delete all access restrictions in the folder:</p> <pre><code>sudo chown -R nobody:nogroup /mnt/&lt;name of your folder&gt;/\n</code></pre> <p>You can also replace the permissions of files in the folder with your own preferences, e.g.:</p> <pre><code>sudo chmod 777 /mnt/&lt;name of your folder&gt;/\n</code></pre> <p>Define access permissions on the NFS server.</p> <p>In the file /etc/exports add the following line:</p> <pre><code>/mnt/&lt;name of your folder&gt; &lt;IP address of allowed client&gt;(rw,sync,no_subtree_check)\n</code></pre> <p>where &lt;IP address of allowed client&gt; is the address of the client allowed to access the folder /mnt/&lt;name of your folder&gt;.</p> <p>e.g.</p> <pre><code># /etc/exports: the access control list for filesystems which may be exported\n\n# to NFS clients. See exports(5).\n\n#\n\n# Example for NFSv2 and NFSv3:\n\n# /srv/homes hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)\n\n#\n\n# Example for NFSv4:\n\n# /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)\n\n# /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check)\n\n#\n\n/mnt/&lt;name of your folder&gt; &lt;IP address of allowed client&gt;(rw,sync,no_subtree_check)\n</code></pre> <p>You can also share your folder with more IP addresses:</p> <pre><code>/mnt/&lt;name of your folder&gt; &lt;IP address 1&gt;(rw,sync,no_subtree_check)\n/mnt/&lt;name of your folder&gt; &lt;IP address 2&gt;(rw,sync,no_subtree_check)\n/mnt/&lt;name of your folder&gt; &lt;IP address 3&gt;(rw,sync,no_subtree_check)\n</code></pre> <p>You can also share the folder with all servers in a subnet (instead of adding every IP address separately) by adding the following line to /etc/exports (e.g. 
servers in 192.168.0.0/24):</p> <pre><code>/mnt/&lt;name of your folder&gt; 192.168.0.0/24(rw,sync,no_subtree_check)\n</code></pre> <p>When the configuration file /etc/exports is saved, invoke the following commands:</p> <pre><code>sudo exportfs -a\nsudo systemctl restart nfs-kernel-server\n</code></pre> <p>IT IS NECESSARY TO OPEN THE PORT 2049 IN A SECURITY GROUP!</p> <p>(The FAQ about opening ports in a security group is available at How can I open new ports for http for my service or instance on 3Engines Cloud)</p> <p>Client Configuration</p> <p>Install required packages:</p> <pre><code>sudo apt install nfs-common\n</code></pre> <p>Mount the NFS folder:</p> <pre><code>sudo mount &lt;IP address of your NFS server&gt;:/mnt/&lt;name of your folder in NFS server&gt; &lt;name of your folder in Client&gt;/\n</code></pre>"},{"location":"datavolume/How-to-export-a-volume-over-NFS-outside-of-a-project-on-3Engines-Cloud.html.html","title":"How to export a volume over NFS outside of a project on 3Engines Cloud\ud83d\udd17","text":"<p>Prerequisites</p> <p>Two Ubuntu servers in various projects (not in a private network) which all have a floating IP assigned.</p> <pre><code>Host: 64.225.128.1\nClient: 64.225.128.2\n</code></pre> <p>On both servers we will create a directory /xdata, which will be shared.</p> <p>On the host</p> <pre><code>eouser@host:~$ sudo apt-get update\neouser@host:~$ sudo apt-get install nfs-kernel-server\neouser@host:~$ sudo mkdir /xdata\neouser@host:~$ sudo chown nobody:nogroup /xdata\neouser@host:~$ sudo nano /etc/exports\n</code></pre> <p>Add the line:</p> <pre><code>/xdata 64.225.128.2(rw,sync,no_subtree_check)\n</code></pre> <p>Save the file.</p> <p>Start the server:</p> <pre><code>eouser@host:~$ sudo systemctl restart nfs-kernel-server\n</code></pre> <p>For Ubuntu, start the server with this command:</p> <pre><code>eouser@host:~$ sudo service nfs-kernel-server start\n</code></pre> <p>Now go to 
https://horizon.3Engines.com/project/security_groups/</p> <p>Create a new security group.</p> <p>Give it a name (e.g. allow_nfs) and save by clicking the \u201cCreate security group\u201d button.</p> <p>Click \u201cmanage rules\u201d.</p> <p>Click \u201cadd rule\u201d.</p> <p>Choose:</p> <p>Rule: Custom TCP Rule</p> <p>Direction: Ingress</p> <p>Open Port: Port</p> <p>Port: 2049</p> <p>Remote: CIDR</p> <p>CIDR: 64.225.128.2</p> <p>Click \u201cAdd\u201d.</p> <p>Go to https://horizon.3Engines.com/project/instances/</p> <p>From the drop-down menu on the right of the \u201cHost\u201d instance, choose \u201cEdit Security Groups\u201d.</p> <p>Click on the \u201cplus\u201d sign on the \u201callow_nfs\u201d group.</p> <p>This will move the group from \u201cAll Security Groups\u201d to \u201cInstance Security Groups\u201d.</p> <p>Click \u201cSave\u201d.</p> <p>On the Client</p> <pre><code>eouser@client:~$ sudo apt-get update\neouser@client:~$ sudo apt-get install nfs-common\neouser@client:~$ sudo mkdir /xdata\neouser@client:~$ sudo mount 64.225.128.1:/xdata /xdata\n</code></pre> <p>You can check if the directory is mounted:</p> <pre><code>eouser@client:~$ df -h\n</code></pre>"},{"location":"datavolume/How-to-extend-the-volume-in-Linux-on-3Engines-Cloud.html.html","title":"How to extend the volume in Linux on 3Engines Cloud\ud83d\udd17","text":"<p>It is possible to extend a Volume from the Horizon dashboard.</p> <p>Another method is to create a new volume, attach it to a VM, copy all the data from the old volume to the new one, check if all the data is properly copied, then detach and delete the old one. Not all filesystems are resizable.</p> <p>Warning</p> <ol> <li>It is strongly recommended to back up the volume by creating a Volume Snapshot before proceeding with extending the volume.</li> </ol> <p>Warning</p> <ol> <li>If you have a volume &lt; 2TB and you want to extend it above 2TB, please do not follow the instructions below. 
Instead, please create a new volume, format it according to another article: How to attach a volume to VM more than 2TB on Linux on 3Engines Cloud, attach it to the VM, copy the data from the old volume to the new one, check if it is fully copied, then detach and delete the old volume.</li> </ol> <p>You may use the following guide to back up the volume: How to create volume Snapshot and attach as Volume on Linux or Windows on 3Engines Cloud</p> <p>Resizing the volume:</p> <p>In this tutorial we will resize a 1GB volume to 5GB.</p> <p>First we need to extend the volume in Horizon.</p> <p>Let\u2019s say that we have a 1GB volume attached to our instance as /dev/vdb:</p> <p></p> <p>And we have it mounted in our Linux machine as /dev/vdb1:</p> <pre><code>eouser@vm-john-01:~$ df -kh\nFilesystem Size Used Avail Use% Mounted on\nudev 1.9G 0 1.9G 0% /dev\ntmpfs 394M 640K 393M 1% /run\n/dev/vda1 15G 2.7G 12G 19% /\ntmpfs 2.0G 0 2.0G 0% /dev/shm\ntmpfs 5.0M 0 5.0M 0% /run/lock\ntmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup\ntmpfs 394M 0 394M 0% /run/user/1001\ntmpfs 394M 0 394M 0% /run/user/1000\n/dev/vdb1 991M 2.6M 922M 1% /my_volume\n</code></pre> <p>It already holds some data that we don\u2019t want to lose.</p> <p>First we need to unmount it in Linux:</p> <pre><code>eouser@vm-john-01:~$ sudo umount /dev/vdb1\n</code></pre> <p>Then detach it in Horizon by clicking \u201cManage Attachments\u201d &gt; \u201cDetach Volume\u201d:</p> <p></p> <p></p> <p>After detaching we will have the \u201cExtend Volume\u201d option available.</p> <p></p> <p></p> <p>We enter a new size, for example 5GB, and click \u201cExtend Volume\u201d:</p> <p></p> <p>Our new volume size is 5GB. Reattach it to your Virtual Machine again. 
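</p> <p>For reference, the same resize can also be done with the OpenStack CLI client while the volume is detached. This is a sketch; the volume name is a placeholder to replace with your own:</p> <pre><code># grow the detached volume to 5 GB\nopenstack volume set --size 5 &lt;volume name&gt;\n</code></pre> <p>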
Now we need to extend our /dev/vdb partition in Linux.</p> <p>Expand the modified partition using growpart (and note the unusual syntax of separating the device name from the partition number):</p> <pre><code>eouser@vm-john-01:~$ sudo growpart /dev/vdb 1\nCHANGED: partition=1 start=2048 old: size=2095104 end=2097152 new: size=10483679 end=10485727\n</code></pre> <p>Next use resize2fs:</p> <pre><code>eouser@vm-john-01:~$ sudo resize2fs /dev/vdb1\nresize2fs 1.45.5 (07-Jan-2020)\nPlease run 'e2fsck -f /dev/vdb1' first.\n</code></pre> <p>Most of the time a filesystem check will be recommended by the system.</p> <pre><code>eouser@vm-john-01:~$ sudo e2fsck -f /dev/vdb1\ne2fsck 1.45.5 (07-Jan-2020)\nPass 1: Checking inodes, blocks, and sizes\nPass 2: Checking directory structure\nPass 3: Checking directory connectivity\nPass 4: Checking reference counts\nPass 5: Checking group summary information\n/dev/vdb1: 11/65536 files (0.0% non-contiguous), 8859/261888 blocks\n</code></pre> <p>After doing e2fsck we proceed with extending partition:</p> <pre><code>eouser@vm-john-01:~$ sudo resize2fs /dev/vdb1\nresize2fs 1.45.5 (07-Jan-2020)\nResizing the filesystem on /dev/vdb1 to 1310459 (4k) blocks.\nThe filesystem on /dev/vdb1 is now 1310459 (4k) blocks long.\n</code></pre> <p>We can now mount our extended volume again.</p> <pre><code>eouser@vm-john-01:~$ sudo mount /dev/vdb1\n</code></pre> <pre><code>eouser@vm-john-01:~$ sudo df -kh\nFilesystem Size Used Avail Use% Mounted on\nudev 1.9G 0 1.9G 0% /dev\ntmpfs 394M 640K 393M 1% /run\n/dev/vda1 15G 2.7G 12G 19% /\ntmpfs 2.0G 0 2.0G 0% /dev/shm\ntmpfs 5.0M 0 5.0M 0% /run/lock\ntmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup\ntmpfs 394M 0 394M 0% /run/user/1001\ntmpfs 394M 0 394M 0% /run/user/1000\n/dev/vdb1 5.0G 4.0M 4.7G 1% /my_volume\n</code></pre> <p>The new size is now 5GB and the data that was previously there is intact.</p>"},{"location":"datavolume/How-to-mount-object-storage-in-Linux-on-3Engines-Cloud.html.html","title":"How to mount 
object storage in Linux on 3Engines Cloud\ud83d\udd17","text":"<p>S3 is a protocol for storing and retrieving data on and from remote servers. The user has their own S3 account and is identified by a pair of identifiers, which are called Access Key and Secret Key. These keys act as a username and password for your S3 account.</p> <p>Usually, for desktop computers we refer to files within a directory. In S3 terminology, a file is called an \u201cobject\u201d and its name is called a \u201ckey\u201d. The S3 term for a directory (or folder) is a \u201cbucket\u201d. To mount object storage on your Linux computer, you will use the command s3fs.</p>"},{"location":"datavolume/How-to-mount-object-storage-in-Linux-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>Prerequisite No. 1 Hosting</p> <p>To use the S3 protocol, you need a 3Engines Cloud hosting account. It comes with a graphical user interface called Horizon: https://horizon.3Engines.com, but you can also use S3 commands from the terminal in various operating systems.</p> <p>Prerequisite No. 2 Valid EC2 credentials</p> <p>The Access Key and Secret Key for access to an S3 account are also called the \u201cEC2 credentials\u201d. See the article</p> <p>How to generate and manage EC2 credentials on 3Engines Cloud</p> <p>At this point, you should have access to the cloud environment, using the OpenStack CLI client. 
It means that the command openstack is operational.</p>"},{"location":"datavolume/How-to-mount-object-storage-in-Linux-on-3Engines-Cloud.html.html#check-your-credentials-and-save-them-in-a-file","title":"Check your credentials and save them in a file\ud83d\udd17","text":"<p>Check your credentials with the following command:</p> <pre><code>openstack ec2 credentials list\n</code></pre> <p>where the Access token and Secret token will be used in the s3fs configuration:</p> <pre><code>echo Access_token:Secret_token &gt; ~/.passwd-s3fs\n</code></pre> <p>That command will store the credentials in a file called .passwd-s3fs. Since its name starts with a dot, it will be hidden from the usual directory listings under Linux.</p> <p>The file will be created in your home directory (because of the ~/ prefix), but you can also create it anywhere else, for instance, in the /etc/ folder and the like.</p> <p>Change the permissions of the newly created file:</p> <pre><code>chmod 600 ~/.passwd-s3fs\n</code></pre> <p>Mode 600 means that you can read and write the file but that none of the other users on the local host will have access to it.</p>"},{"location":"datavolume/How-to-mount-object-storage-in-Linux-on-3Engines-Cloud.html.html#enable-3fs","title":"Enable s3fs\ud83d\udd17","text":"<p>Uncomment \u201cuser_allow_other\u201d in the fuse.conf file as root:</p> <pre><code>sudo nano /etc/fuse.conf\n</code></pre> <p>Now you are ready to mount your object storage to your Linux system. 
The command looks like:</p> <pre><code>s3fs w-container-1 /local/mount/point -o passwd_file=~/.passwd-s3fs -o url=https://s3.waw3-1.3Engines.com -o use_path_request_style -o umask=0002 -o allow_other\n</code></pre>"},{"location":"datavolume/How-to-mount-object-storage-in-Linux-on-3Engines-Cloud.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>If you want to access S3 files without mounting them to the local computer, use the command s3cmd.</p> <p>How to access private object storage using S3cmd or boto3 on 3Engines Cloud</p>"},{"location":"datavolume/How-to-move-data-volume-between-two-VMs-using-OpenStack-Horizon-on-3Engines-Cloud.html.html","title":"How to move data volume between two VMs using OpenStack Horizon on 3Engines Cloud\ud83d\udd17","text":"<p>Volumes are used to store data, and that data can be accessed from a virtual machine to which the volume is attached. To access data stored on a volume from another virtual machine, you need to disconnect that volume from the virtual machine to which it is currently connected, and connect it to another instance.</p> <p>This article uses the Horizon dashboard to transfer volumes between virtual machines which are in the same project.</p>"},{"location":"datavolume/How-to-move-data-volume-between-two-VMs-using-OpenStack-Horizon-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Hosting</p> <p>You need a 3Engines Cloud hosting account with the Horizon interface https://horizon.3Engines.com.</p> <p>No. 2 Source virtual machine and volume</p> <p>We assume that you have a virtual machine (which we will call the source virtual machine) to which a volume is attached.</p> <p>No. 3 Destination virtual machine</p> <p>We also assume that you want to access the data stored on the volume mentioned in Prerequisite No. 
2 from another instance which is in the same project - we will call that instance the destination virtual machine.</p>"},{"location":"datavolume/How-to-move-data-volume-between-two-VMs-using-OpenStack-Horizon-on-3Engines-Cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li> <p>Ensure that the transfer is possible</p> </li> <li> <p>Projects must be on the same cloud</p> </li> <li>Volume cannot be used for booting an operating system</li> <li>File system compatibility</li> <li>Making sure that the source virtual machine does not try to access the volume</li> <li>Other volume and instance conditions for successful transfer</li> <li>Shutting down the source virtual machine</li> <li>Shutting down the source virtual machine using Horizon dashboard</li> <li>Disconnecting volume</li> <li>Attaching volume to destination virtual machine</li> </ul> <p>Some parts of some screenshots in this article are greyed out for privacy reasons.</p>"},{"location":"datavolume/How-to-move-data-volume-between-two-VMs-using-OpenStack-Horizon-on-3Engines-Cloud.html.html#ensure-that-the-transfer-is-possible","title":"Ensure that the transfer is possible\ud83d\udd17","text":"<p>Before the actual transfer, you have to examine the state of the volume and of the instances and conclude whether the transfer is possible right away or whether you should perform other operations first:</p>"},{"location":"datavolume/How-to-move-data-volume-between-two-VMs-using-OpenStack-Horizon-on-3Engines-Cloud.html.html#projects-must-be-on-the-same-cloud","title":"Projects must be on the same cloud\ud83d\udd17","text":"<p>If the projects are not on the same cloud, do not use this article but see one of these articles instead:</p>"},{"location":"datavolume/How-to-restore-volume-from-snapshot-on-3Engines-Cloud.html.html","title":"How to restore volume from snapshot on 3Engines Cloud\ud83d\udd17","text":"<p>In this article, you will learn how to restore a volume from a volume 
snapshot using the Horizon dashboard or the OpenStack CLI client.</p> <p>This can be achieved by creating a new volume from an existing snapshot. You can then delete the previous snapshot and, optionally, the previous volume.</p>"},{"location":"datavolume/How-to-restore-volume-from-snapshot-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Hosting</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com</p> <p>No. 2 A volume snapshot</p> <p>You need to have a volume snapshot which you want to restore.</p> <p>No. 3 OpenStack CLI client</p> <p>If you want to interact with the 3Engines Cloud using the OpenStack CLI client, you need to have it installed. Check one of these articles:</p>"},{"location":"datavolume/Volume-snapshot-inheritance-and-its-consequences-on-3Engines-Cloud.html.html","title":"Volume snapshot inheritance and its consequences on 3Engines Cloud\ud83d\udd17","text":"<p>Performing a volume snapshot is a common form of securing your data against loss. There is nothing wrong with that, but you should remember what the consequences are.</p> <p>To illustrate the situation, we will present it with an example: We have created a volume called \u201cVolume A\u201d.</p> <p></p> <p>Next we create a snapshot \u201cSnapshot A\u201d from the volume \u201cVolume A\u201d.</p> <p></p> <p>From the OpenStack dashboard we can create new volumes \u201cVolume B\u201d and \u201cVolume C\u201d based on the previously created snapshot \u201cSnapshot A\u201d.</p> <p></p> <p>At the moment we have two new volumes which are based on the \u201cSnapshot A\u201d snapshot. 
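</p> <p>The same parent-child chain can be sketched with the OpenStack CLI; this is a hedged sketch in which the lowercase names volume-a, snapshot-a, volume-b and volume-c are illustrative stand-ins for the objects above, assuming the openstack client is already configured for your project:</p> <pre><code>openstack volume snapshot create --volume volume-a snapshot-a\nopenstack volume create --snapshot snapshot-a volume-b\nopenstack volume create --snapshot snapshot-a volume-c\n</code></pre> <p>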
Suppose we no longer need the volume called \u201cVolume A\u201d and we want to delete it.</p> <p></p> <p>Unfortunately, its deletion will not be possible directly because to delete a given volume, we first have to delete its snapshots.</p> <p></p> <p>So we must first delete the snapshot \u201cSnapshot A\u201d and then the volume \u201cVolume A\u201d.</p> <p>However, this will also not be possible due to the fact that the \u201cSnapshot A\u201d snapshot is the source for two volumes, \u201cVolume B\u201d and \u201cVolume C\u201d.</p> <p>To delete a volume whose snapshot was used to create other volumes, we must first delete those volumes, then the snapshot itself, and only then the original volume.</p> <p>In conclusion, when creating new volumes from a snapshot, keep inheritance in mind. Snapshot \u201cSnapshot A\u201d is a parent for the volumes (children) \u201cVolume B\u201d and \u201cVolume C\u201d, and if we want to delete the volume \u201cVolume A\u201d, we have to start with the youngest generation (Volume B and Volume C).</p> <p>Backups are another solution: they do not create such bonds as snapshots do, and a backup may exist even after the volume from which the backup was created has been deleted. 
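</p> <p>Expressed as commands, the deletion has to proceed children-first; a hedged sketch, where volume-a, volume-b, volume-c and snapshot-a are illustrative stand-ins for the volumes and snapshot above, assuming the openstack client is configured:</p> <pre><code>openstack volume delete volume-b volume-c\nopenstack volume snapshot delete snapshot-a\nopenstack volume delete volume-a\n</code></pre> <p>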
Please see How to Backup an Instance and Download it to the Desktop on 3Engines Cloud OpenStack Hosting.</p>"},{"location":"datavolume/datavolume.html.html","title":"Data Volume Management","text":""},{"location":"datavolume/datavolume.html.html#available-documentation","title":"Available Documentation","text":"<ul> <li>How to attach a volume to VM less than 2TB on Linux on 3Engines Cloud</li> <li>How to attach a volume to VM more than 2TB on Linux on 3Engines Cloud</li> <li>Ephemeral vs Persistent storage option Create New Volume on 3Engines Cloud</li> <li>How to export a volume over NFS on 3Engines Cloud</li> <li>How to export a volume over NFS outside of a project on 3Engines Cloud</li> <li>How to extend the volume in Linux on 3Engines Cloud</li> <li>How to mount object storage in Linux on 3Engines Cloud</li> <li>How to move data volume between two VMs using OpenStack Horizon on 3Engines Cloud</li> <li>How many objects can I put into Object Storage container bucket on 3Engines Cloud</li> <li>How to create volume Snapshot and attach as Volume on Linux or Windows on 3Engines Cloud</li> <li>Volume snapshot inheritance and its consequences on 3Engines Cloud</li> <li>How to Create Backup of Your Volume From Windows Machine on 3Engines Cloud</li> <li>How To Attach Volume To Windows VM On 3Engines Cloud</li> <li>How to create or delete volume snapshot on 3Engines Cloud</li> <li>How to restore volume from snapshot on 3Engines Cloud</li> <li>Bootable versus non-bootable volumes on 3Engines Cloud</li> </ul>"},{"location":"kubernetes/Automatic-Kubernetes-cluster-upgrade-on-3Engines-Cloud-OpenStack-Magnum.html.html","title":"Automatic Kubernetes cluster upgrade on 3Engines Cloud OpenStack Magnum\ud83d\udd17","text":"<p>Warning</p> <p>Upgradeable cluster templates are available on 3Engines Cloud WAW4-1 region only at the moment of this writing.</p> <p>OpenStack Magnum clusters created in 3Engines Cloud can be automatically upgraded to the next minor Kubernetes version. 
This feature is available for clusters starting with version 1.29 of Kubernetes.</p> <p>In this article we demonstrate an upgrade of a Magnum Kubernetes cluster from version 1.29 to version 1.30.</p> <p>What are we going to cover</p>"},{"location":"kubernetes/Autoscaling-Kubernetes-Cluster-Resources-on-3Engines-Cloud-OpenStack-Magnum.html.html","title":"Autoscaling Kubernetes Cluster Resources on 3Engines Cloud OpenStack Magnum\ud83d\udd17","text":"<p>When autoscaling of Kubernetes clusters is turned on, the system can</p> <ul> <li>Add resources when the demand is high, or</li> <li>Remove unneeded resources when the demand is low and thus keep the costs down.</li> </ul> <p>The whole process can be automatic, helping the administrator concentrate on more important tasks at hand.</p> <p>This article explains various commands to resize or scale the cluster and will lead to a command to automatically create an autoscalable Kubernetes cluster for OpenStack Magnum.</p>"},{"location":"kubernetes/Autoscaling-Kubernetes-Cluster-Resources-on-3Engines-Cloud-OpenStack-Magnum.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Definitions of horizontal, vertical and node scaling</li> <li>Define autoscaling when creating the cluster in the Horizon interface</li> <li>Define autoscaling when creating the cluster using the CLI</li> <li>Get cluster template labels from the Horizon interface</li> <li>Get cluster template labels from the CLI</li> </ul>"},{"location":"kubernetes/Autoscaling-Kubernetes-Cluster-Resources-on-3Engines-Cloud-OpenStack-Magnum.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Hosting</p> <p>You need a 3Engines Cloud hosting account with the Horizon interface https://horizon.3Engines.com.</p> <p>No. 
2 Creating clusters with CLI</p> <p>The article How To Use Command Line Interface for Kubernetes Clusters On 3Engines Cloud OpenStack Magnum will introduce you to creating clusters using the command line interface.</p> <p>No. 3 Connect openstack client to the cloud</p> <p>Prepare the openstack and magnum clients by executing Step 2 Connect OpenStack and Magnum Clients to Horizon Cloud from the article How To Install OpenStack and Magnum Clients for Command Line Interface to 3Engines Cloud Horizon</p> <p>No. 4 Resizing Nodegroups</p> <p>Step 7 of the article Creating Additional Nodegroups in Kubernetes Cluster on 3Engines Cloud OpenStack Magnum shows an example of resizing the nodegroups for autoscaling.</p> <p>No. 5 Creating Clusters</p> <p>Step 2 of the article How to Create a Kubernetes Cluster Using 3Engines Cloud OpenStack Magnum shows how to define master and worker nodes for autoscaling.</p> <p>There are three different autoscaling features that a Kubernetes cloud can offer:</p>"},{"location":"kubernetes/Autoscaling-Kubernetes-Cluster-Resources-on-3Engines-Cloud-OpenStack-Magnum.html.html#horizontal-pod-autoscaler","title":"Horizontal Pod Autoscaler\ud83d\udd17","text":"<p>Scaling a Kubernetes cluster horizontally means increasing or decreasing the number of running pods, depending on the actual demands at run time. Parameters to take into account are the usage of CPU and memory, as well as the desired minimum and maximum numbers of pod replicas.</p> <p>Horizontal scaling is also known as \u201cscaling out\u201d and is shortened to HPA.</p>"},{"location":"kubernetes/Autoscaling-Kubernetes-Cluster-Resources-on-3Engines-Cloud-OpenStack-Magnum.html.html#vertical-pod-autoscaler","title":"Vertical Pod Autoscaler\ud83d\udd17","text":"<p>Vertical scaling (or \u201cscaling up\u201d, VPA) is adding or subtracting resources to and from an existing machine. If more CPUs are needed, add them. 
When they are not needed, shut some of them down.</p>"},{"location":"kubernetes/Autoscaling-Kubernetes-Cluster-Resources-on-3Engines-Cloud-OpenStack-Magnum.html.html#cluster-autoscaler","title":"Cluster Autoscaler\ud83d\udd17","text":"<p>HPA and VPA reorganize the usage of resources and the number of pods; however, there may come a time when the size of the system itself prevents it from satisfying the demand. The solution is to autoscale the cluster itself, to increase or decrease the number of nodes on which the pods will run.</p> <p>Once the number of nodes is adjusted, the pods and other resources need to rebalance themselves across the cluster, also automatically. The number of nodes acts as a physical barrier to the autoscaling of pods.</p> <p>All three models of autoscaling can be combined.</p>"},{"location":"kubernetes/Autoscaling-Kubernetes-Cluster-Resources-on-3Engines-Cloud-OpenStack-Magnum.html.html#define-autoscaling-when-creating-a-cluster","title":"Define Autoscaling When Creating a Cluster\ud83d\udd17","text":"<p>You can define autoscaling parameters while defining a new cluster, using the window called Size in the cluster creation wizard:</p> <p></p> <p>Specify a minimum and maximum number of worker nodes. If these values are 2 and 4 respectively, the cluster will have not less than 2 nodes and not more than 4 nodes at any time. If there is no traffic to the cluster, it will be automatically scaled to 2 nodes. In this example, the cluster can have 2, 3, or 4 nodes depending on the traffic.</p> <p>For the entire process of creating a Kubernetes cluster in Horizon, see Prerequisites No. 5.</p> <p>Warning</p> <p>If you decide to use the NGINX Ingress option while defining a cluster, NGINX ingress will run as 3 replicas on 3 separate nodes. 
This will override the minimum number of nodes in the Magnum autoscaler.</p>"},{"location":"kubernetes/Autoscaling-Kubernetes-Cluster-Resources-on-3Engines-Cloud-OpenStack-Magnum.html.html#autoscaling-node-groups-at-run-time","title":"Autoscaling Node Groups at Run Time\ud83d\udd17","text":"<p>The autoscaler in Magnum uses Node Groups. Node groups can be used to create workers with different flavors. The default-worker node group is automatically created when the cluster is provisioned. Node groups have lower and upper limits of node count. This is the command to print them out for a given cluster:</p> <pre><code>openstack coe nodegroup show NoLoadBalancer default-worker -f json -c max_node_count -c node_count -c min_node_count\n</code></pre> <p>The result would be:</p> <pre><code>{\n \"node_count\": 1,\n \"max_node_count\": 2,\n \"min_node_count\": 1\n}\n</code></pre> <p>This works fine until you try to resize the cluster beyond the limit set in the node group. If you try to resize the above cluster to 12 nodes, like this:</p> <pre><code>openstack coe cluster resize NoLoadBalancer --nodegroup default-worker 12\n</code></pre> <p>you will get the following error:</p> <pre><code>Resizing default-worker outside the allowed range: min_node_count = 1, max_node_count = 2 (HTTP 400) (Request-ID: req-bbb09fc3-7df4-45c3-8b9b-fbf78d202ffd)\n</code></pre> <p>To resolve this error, change the node group's max_node_count manually:</p> <pre><code>openstack coe nodegroup update NoLoadBalancer default-worker replace max_node_count=15\n</code></pre> <p>and then resize the cluster to the desired value, which is less than 15 in this example:</p> <pre><code>openstack coe cluster resize NoLoadBalancer --nodegroup default-worker 12\n</code></pre> <p>If you repeat the first statement:</p> <pre><code>openstack coe nodegroup show NoLoadBalancer default-worker -f json -c max_node_count -c node_count -c min_node_count\n</code></pre> <p>the result will now show the corrected value:</p> <pre><code> {\n \"node_count\": 12,\n 
\"max_node_count\": 15,\n \"min_node_count\": 1\n}\n</code></pre>"},{"location":"kubernetes/Autoscaling-Kubernetes-Cluster-Resources-on-3Engines-Cloud-OpenStack-Magnum.html.html#how-autoscaling-detects-upper-limit","title":"How Autoscaling Detects Upper Limit\ud83d\udd17","text":"<p>The first version of Autoscaling would take the current upper limit of autoscaling in variable node_count and add 1 to it. If the command to create a cluster were</p> <pre><code>openstack coe cluster create mycluster --cluster-template mytemplate --node-count 8 --master-count 3\n</code></pre> <p>that version of Autoscaler would take the value of 9 (counting as 8 + 1). However, that procedure was limited to the default-worker node group only.</p> <p>The current Autoscaler can support multiple node groups by detecting the role of the node group:</p> <pre><code>openstack coe nodegroup show NoLoadBalancer default-worker -f json -c role\n</code></pre> <p>and the result is</p> <pre><code>{\n \"role\": \"worker\"\n}\n</code></pre> <p>As long as the role is worker and max_node_count is greater than 0, the Autoscaler will try to scale the default-worker node group by adding 1 to max_node_count.</p> <p>Attention</p> <p>Any additional node group must include concrete max_node_count attribute.</p> <p>See Prerequisites No. 
4 for detailed examples of using the openstack coe nodegroup family of commands.</p>"},{"location":"kubernetes/Autoscaling-Kubernetes-Cluster-Resources-on-3Engines-Cloud-OpenStack-Magnum.html.html#autoscaling-labels-for-clusters","title":"Autoscaling Labels for Clusters\ud83d\udd17","text":"<p>There are three labels for clusters that influence autoscaling:</p> <ul> <li>auto_scaling_enabled \u2013 if true, it is enabled</li> <li>min_node_count \u2013 the minimal number of nodes</li> <li>max_node_count \u2013 the maximal number of nodes, at any time.</li> </ul> <p>When defining a cluster through the Horizon interface, you are actually setting up these cluster labels.</p> <p></p> <p>List clusters with Container Infra =&gt; Cluster and click on the name of the cluster. Under Labels, you will find the current value for auto_scaling_enabled.</p> <p></p> <p>If it is true, the cluster will autoscale.</p>"},{"location":"kubernetes/Autoscaling-Kubernetes-Cluster-Resources-on-3Engines-Cloud-OpenStack-Magnum.html.html#create-new-cluster-using-cli-with-autoscaling-on","title":"Create New Cluster Using CLI With Autoscaling On\ud83d\udd17","text":"<p>The command to create a cluster with the CLI must encompass all of the usual parameters as well as all of the labels needed for the cluster to function. 
The peculiarity of the syntax is that label parameters must be one single string, without any blanks in between.</p> <p>This is what one such command could look like:</p> <pre><code>openstack coe cluster create mycluster\n--cluster-template k8s-stable-1.23.5\n--keypair sshkey\n--master-count 1\n--node-count 3\n--labels auto_scaling_enabled=true,autoscaler_tag=v1.22.0,calico_ipv4pool_ipip=Always,cinder_csi_plugin_tag=v1.21.0,cloud_provider_enabled=true,cloud_provider_tag=v1.21.0,container_infra_prefix=registry-public.3Engines.com/magnum/,eodata_access_enabled=false,etcd_volume_size=8,etcd_volume_type=ssd,hyperkube_prefix=registry-public.3Engines.com/magnum/,k8s_keystone_auth_tag=v1.21.0,kube_tag=v1.21.5-rancher1,master_lb_floating_ip_enabled=true\n</code></pre> <p>If you just tried to copy and paste it into the terminal, you would get syntax errors. Line breaks are not allowed; the entire command must be one long string. To make your life easier, here is a version of the command that you can copy with success.</p> <p>Warning</p> <p>The line containing labels will be only partially visible on the screen, but once you paste it into the command line, the terminal software will execute it without problems.</p> <p>The command is:</p> <pre><code>openstack coe cluster create mycluster --cluster-template k8s-stable-1.23.5 --keypair sshkey --master-count 1 --node-count 3 --labels auto_scaling_enabled=true,autoscaler_tag=v1.22.0,calico_ipv4pool_ipip=Always,cinder_csi_plugin_tag=v1.21.0,cloud_provider_enabled=true,cloud_provider_tag=v1.21.0,container_infra_prefix=registry-public.3Engines.com/magnum/,eodata_access_enabled=false,etcd_volume_size=8,etcd_volume_type=ssd,hyperkube_prefix=registry-public.3Engines.com/magnum/,k8s_keystone_auth_tag=v1.21.0,kube_tag=v1.21.5-rancher1,master_lb_floating_ip_enabled=true,min_node_count=2,max_node_count=4\n</code></pre> <p>The cluster name will be mycluster, with one master node and three worker nodes in the beginning.</p> <p>Note</p> <p>It is 
mandatory to set up the maximal number of nodes in autoscaling. If not specified, the max_node_count will default to 0, and there will be no autoscaling at all for the particular nodegroup.</p> <p>This is the result after the creation:</p> <p></p> <p>Three worker node addresses are active: 10.0.0.102, 10.0.0.27, and 10.0.0.194.</p> <p>There is no traffic to the cluster, so the autoscaling immediately kicked in. A minute or two after the creation was finished, the number of worker nodes dropped by one, to addresses 10.0.0.27 and 10.0.0.194 \u2013 that is autoscaling at work.</p>"},{"location":"kubernetes/Autoscaling-Kubernetes-Cluster-Resources-on-3Engines-Cloud-OpenStack-Magnum.html.html#nodegroups-with-worker-role-will-be-automatically-autoscalled","title":"Nodegroups With Worker Role Will Be Automatically Autoscaled\ud83d\udd17","text":"<p>The Autoscaler automatically detects all new nodegroups with the \u201cworker\u201d role assigned. The \u201cworker\u201d role is assigned by default if not specified. The maximum number of nodes must be specified as well.</p> <p>First see which nodegroups are present for cluster k8s-cluster. The command is</p> <pre><code>openstack coe nodegroup list k8s-cluster -c name -c node_count -c status -c role\n</code></pre> <p>The switch -c denotes which column to show, disregarding all other columns that are not listed in the command. You will see a table with columns name, node_count, status and role, which means that columns such as uuid, flavor_id and image_id will not take valuable space onscreen. 
The result is a table with only the four columns that are relevant to adding nodegroups with roles:</p> <p></p> <p>Now add and print a nodegroup without a role:</p> <pre><code>openstack coe nodegroup create k8s-cluster nodegroup-without-role --node-count 1 --min-nodes 1 --max-nodes 5\n\nopenstack coe nodegroup list k8s-cluster -c name -c node_count -c status -c role\n</code></pre> <p>Since the role was not specified, a default value of \u201cworker\u201d was assigned to node group nodegroup-without-role. Since the system is set up to automatically autoscale nodegroups with the worker role, if you add a nodegroup without a role, it will autoscale.</p> <p></p> <p>Now add a node group called nodegroup-with-role and the name of the role will be custom:</p> <pre><code>openstack coe nodegroup create k8s-cluster nodegroup-with-role --node-count 1 --min-nodes 1 --max-nodes 5 --role custom\n\nopenstack coe nodegroup list k8s-cluster -c name -c node_count -c status -c role\n</code></pre> <p></p> <p>That will add a nodegroup but will not autoscale it on its own, as there is no worker role specified for the nodegroup.</p> <p>Finally, add a nodegroup called nodegroup-with-role-2 which will have two roles defined in one statement, that is, both custom and worker. 
Since at least one of the roles is worker, it will autoscale automatically.</p> <pre><code>openstack coe nodegroup create k8s-cluster nodegroup-with-role-2 --node-count 1 --min-nodes 1 --max-nodes 5 --role custom,worker\n\nopenstack coe nodegroup list k8s-cluster -c name -c node_count -c status -c role\n</code></pre> <p></p> <p>Cluster k8s-cluster now has 8 nodes:</p> <p></p> <p>You can delete these three nodegroups with the following set of commands:</p> <pre><code>openstack coe nodegroup delete k8s-cluster nodegroup-with-role\n\nopenstack coe nodegroup delete k8s-cluster nodegroup-with-role-2\n\nopenstack coe nodegroup delete k8s-cluster nodegroup-without-role\n</code></pre> <p>Once again, see the result:</p> <pre><code>openstack coe nodegroup list k8s-cluster -c name -c node_count -c status -c role\n</code></pre> <p></p>"},{"location":"kubernetes/Autoscaling-Kubernetes-Cluster-Resources-on-3Engines-Cloud-OpenStack-Magnum.html.html#how-to-obtain-all-labels-from-horizon-interface","title":"How to Obtain All Labels From Horizon Interface\ud83d\udd17","text":"<p>Use Container Infra =&gt; Clusters and click on the cluster name. You will get plain text in the browser; just copy the rows under Labels and paste them to the text editor of your choice.</p> <p></p> <p>In the text editor, manually remove line ends and make one string without breaks and carriage returns, then paste it back to the command.</p>"},{"location":"kubernetes/Autoscaling-Kubernetes-Cluster-Resources-on-3Engines-Cloud-OpenStack-Magnum.html.html#how-to-obtain-all-labels-from-the-cli","title":"How To Obtain All Labels From the CLI\ud83d\udd17","text":"<p>There is a special command which will produce labels from a cluster:</p> <pre><code>openstack coe cluster template show k8s-stable-1.23.5 -c labels -f yaml\n</code></pre> <p>This is the result:</p> <p></p> <p>That is YAML format, as specified by the -f parameter. 
The rows represent label values and your next action is to create one long string without line breaks as in the previous example, then form the CLI command.</p>"},{"location":"kubernetes/Autoscaling-Kubernetes-Cluster-Resources-on-3Engines-Cloud-OpenStack-Magnum.html.html#use-labels-string-when-creating-cluster-in-horizon","title":"Use Labels String When Creating Cluster in Horizon\ud83d\udd17","text":"<p>The long labels string can also be used when creating the cluster manually, i.e. from the Horizon interface. The place to insert those labels is described in Step 4 Define Labels in Prerequisites No. 2.</p>"},{"location":"kubernetes/Autoscaling-Kubernetes-Cluster-Resources-on-3Engines-Cloud-OpenStack-Magnum.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>Autoscaling is similar to autohealing of Kubernetes clusters and both bring automation to the table. They also guarantee that the system will autocorrect as long as it is within its basic parameters. Use autoscaling of cluster resources as much as you can!</p>"},{"location":"kubernetes/Backup-of-Kubernetes-Cluster-using-Velero.html.html","title":"Backup of Kubernetes Cluster using Velero\ud83d\udd17","text":""},{"location":"kubernetes/Backup-of-Kubernetes-Cluster-using-Velero.html.html#what-is-velero","title":"What is Velero\ud83d\udd17","text":"<p>Velero is the official open source project from VMware. It can back up all Kubernetes API objects and persistent volumes from the cluster on which it is installed. Backed up objects can be restored on the same cluster, or on a new one. Using a package like Velero is essential for any serious development in the Kubernetes cluster.</p> <p>In essence, you create object store under OpenStack, either using Horizon or Swift module of openstack command and then save cluster state into it. 
Restoring is the same in reverse \u2013 read from that object store and save it to a Kubernetes cluster.</p> <p>Velero has its own CLI command system so it is possible to automate the creation of backups using cron jobs.</p>"},{"location":"kubernetes/Backup-of-Kubernetes-Cluster-using-Velero.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Getting EC2 Client Credentials</li> <li>Adjusting \u201cvalues.yaml\u201d, the configuration file</li> <li>Creating namespace called velero for precise access to the Kubernetes cluster</li> <li>Installing Velero with a Helm chart</li> <li>Installing and deleting backups using Velero</li> <li>Example 1 Basics of Restoring an Application</li> <li>Example 2 Snapshot of Restoring an Application</li> </ul>"},{"location":"kubernetes/Backup-of-Kubernetes-Cluster-using-Velero.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Hosting</p> <p>You need a 3Engines Cloud hosting account with the Horizon interface https://horizon.3Engines.com.</p> <p>The resources that you require and use will be reflected in the state of your account wallet. Check your account statistics at https://portal.3Engines.com/.</p> <p>No. 2 How to Access Kubernetes cluster post-deployment</p> <p>We shall also assume that you have one or more Kubernetes clusters ready and accessible via a kubectl command:</p> <p>How To Access Kubernetes Cluster Post Deployment Using Kubectl On 3Engines Cloud OpenStack Magnum</p> <p>The result of that article will be the setting up of the KUBECONFIG environment variable, which points to the configuration file for access to the Kubernetes cloud. 
A typical command will be:</p> <pre><code>export KUBECONFIG=/home/username/Desktop/kubernetes/k8sdir/config\n</code></pre> <p>In case this is the first time you are using that particular config file, make it more secure by executing the following command as well:</p> <pre><code>chmod 600 /home/username/Desktop/kubernetes/k8sdir/config\n</code></pre> <p>No. 3 Handling Helm</p> <p>To install Velero, we shall use Helm:</p> <p>Deploying Helm Charts on Magnum Kubernetes Clusters on 3Engines Cloud Cloud.</p> <p>No. 4 An object storage S3 bucket available</p> <p>To create one, you can access object storage with Horizon interface or CLI.</p> Horizon commands How to use Object Storage on 3Engines Cloud. CLI You can also use command such as <pre><code>openstack container\n</code></pre> <p>to work with object storage. For more information see How to access object storage using OpenStack CLI on 3Engines Cloud</p> <p>Either way, we shall assume that there is a container called \u201cbucketnew\u201d:</p> <p></p> <p>Supply your own unique name while working through this article.</p>"},{"location":"kubernetes/Backup-of-Kubernetes-Cluster-using-Velero.html.html#before-installing-velero","title":"Before Installing Velero\ud83d\udd17","text":"<p>We shall install Velero on Ubuntu 22.04; using other Linux distributions would be similar.</p> <p>Update and upgrade your Ubuntu environment:</p> <pre><code>sudo apt update &amp;&amp; sudo apt upgrade\n</code></pre> <p>It will be necessary to have access to a Kubernetes cluster, v1.16 or later, with DNS and container networking enabled. For more information on supported Kubernetes versions, see Velero compatibility matrix.</p>"},{"location":"kubernetes/Backup-of-Kubernetes-Cluster-using-Velero.html.html#installation-step-1-getting-ec2-client-credentials","title":"Installation step 1 Getting EC2 client credentials\ud83d\udd17","text":"<p>First fetch EC2 credentials from OpenStack. They are necessary to access private bucket (container). 
Generate them on your own by executing the following commands:</p> <pre><code>openstack ec2 credentials create\nopenstack ec2 credentials list\n</code></pre> <p>Save the Access Key and the Secret Key somewhere safe. They will be needed in the next step, in which you set up a Velero configuration file.</p>"},{"location":"kubernetes/Backup-of-Kubernetes-Cluster-using-Velero.html.html#installation-step-2-adjust-the-configuration-file-valuesyaml","title":"Installation step 2 Adjust the configuration file - \u201cvalues.yaml\u201d\ud83d\udd17","text":"<p>Now create or adjust a configuration file for Velero. Use a text editor of your choice to create that file. On macOS or Linux, for example, you can use nano, like this:</p> <pre><code>sudo nano values.yaml\n</code></pre> <p>Use the configuration file provided below. Fill in the required fields, which are marked with ##:</p> <p>values.yaml</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>initContainers:\n- name: velero-plugin-for-aws\n image: velero/velero-plugin-for-aws:v1.4.0\n imagePullPolicy: IfNotPresent\n volumeMounts:\n - mountPath: /target\n name: plugins\n\nconfiguration:\n provider: aws\n backupStorageLocation:\n provider: aws\n name: ## enter name of backup storage location (could be anything)\n bucket: ## enter name of bucket created in openstack\n default: true\n config:\n region: default\n s3ForcePathStyle: true\n s3Url: ## enter URL of object storage (for example \"https://s3.waw4-1.3Engines.com\")\ncredentials:\n secretContents: ## enter access and secret key to ec2 bucket. 
This configuration will create a Kubernetes secret.\n cloud: |\n [default]\n aws_access_key_id=\n aws_secret_access_key=\n ##existingSecret: ## If you want to use an existing secret, created from a sealed secret, then use this variable and omit credentials.secretContents.\nsnapshotsEnabled: false\ndeployRestic: true\nrestic:\n podVolumePath: /var/lib/kubelet/pods\n privileged: true\nschedules:\n mybackup:\n disabled: false\n schedule: \"0 6,18 * * *\" ## choose the time when scheduled backups will be made.\n template:\n ttl: \"240h\" ## choose the ttl after which the backups will be removed.\n snapshotVolumes: false\n</code></pre> <pre><code>initContainers:\n- name: velero-plugin-for-aws\n image: velero/velero-plugin-for-aws:v1.4.0\n imagePullPolicy: IfNotPresent\n volumeMounts:\n - mountPath: /target\n name: plugins\n\nconfiguration:\n provider: aws\n backupStorageLocation:\n provider: aws\n name: ## enter name of backup storage location (could be anything)\n bucket: ## enter name of bucket created in openstack\n default: true\n config:\n region: waw3-1\n s3ForcePathStyle: true\n s3Url: ## enter URL of object storage (for example \"https://s3.waw3-1.3Engines.com\")\ncredentials:\n secretContents: ## enter access and secret key to ec2 bucket. 
This configuration will create a Kubernetes secret.\n cloud: |\n [default]\n aws_access_key_id=\n aws_secret_access_key=\n ##existingSecret: ## If you want to use an existing secret, created from a sealed secret, then use this variable and omit credentials.secretContents.\nsnapshotsEnabled: false\ndeployRestic: true\nrestic:\n podVolumePath: /var/lib/kubelet/pods\n privileged: true\nschedules:\n mybackup:\n disabled: false\n schedule: \"0 6,18 * * *\" ## choose the time when scheduled backups will be made.\n template:\n ttl: \"240h\" ## choose the ttl after which the backups will be removed.\n snapshotVolumes: false\n</code></pre> <pre><code>initContainers:\n- name: velero-plugin-for-aws\n image: velero/velero-plugin-for-aws:v1.4.0\n imagePullPolicy: IfNotPresent\n volumeMounts:\n - mountPath: /target\n name: plugins\n\nconfiguration:\n provider: aws\n backupStorageLocation:\n provider: aws\n name: ## enter name of backup storage location (could be anything)\n bucket: ## enter name of bucket created in openstack\n default: true\n config:\n region: default\n s3ForcePathStyle: true\n s3Url: ## enter URL of object storage (for example \"https://s3.waw3-2.3Engines.com\")\ncredentials:\n secretContents: ## enter access and secret key to ec2 bucket. 
This configuration will create a Kubernetes secret.\n cloud: |\n [default]\n aws_access_key_id=\n aws_secret_access_key=\n ##existingSecret: ## If you want to use an existing secret, created from a sealed secret, then use this variable and omit credentials.secretContents.\nsnapshotsEnabled: false\ndeployRestic: true\nrestic:\n podVolumePath: /var/lib/kubelet/pods\n privileged: true\nschedules:\n mybackup:\n disabled: false\n schedule: \"0 6,18 * * *\" ## choose the time when scheduled backups will be made.\n template:\n ttl: \"240h\" ## choose the ttl after which the backups will be removed.\n snapshotVolumes: false\n</code></pre> <pre><code>initContainers:\n- name: velero-plugin-for-aws\n image: velero/velero-plugin-for-aws:v1.4.0\n imagePullPolicy: IfNotPresent\n volumeMounts:\n - mountPath: /target\n name: plugins\n\nconfiguration:\n provider: aws\n backupStorageLocation:\n provider: aws\n name: ## enter name of backup storage location (could be anything)\n bucket: ## enter name of bucket created in openstack\n default: true\n config:\n region: default\n s3ForcePathStyle: true\n s3Url: ## enter URL of object storage (for example \"https://s3.fra1-2.3Engines.com\")\ncredentials:\n secretContents: ## enter access and secret key to ec2 bucket. 
This configuration will create a Kubernetes secret.\n cloud: |\n [default]\n aws_access_key_id=\n aws_secret_access_key=\n ##existingSecret: ## If you want to use an existing secret, created from a sealed secret, then use this variable and omit credentials.secretContents.\nsnapshotsEnabled: false\ndeployRestic: true\nrestic:\n podVolumePath: /var/lib/kubelet/pods\n privileged: true\nschedules:\n mybackup:\n disabled: false\n schedule: \"0 6,18 * * *\" ## choose the time when scheduled backups will be made.\n template:\n ttl: \"240h\" ## choose the ttl after which the backups will be removed.\n snapshotVolumes: false\n</code></pre> <p>Paste the content into the configuration file values.yaml and save.</p> <p>Example of an already configured file:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>initContainers:\n- name: velero-plugin-for-aws\n image: velero/velero-plugin-for-aws:v1.4.0\n imagePullPolicy: IfNotPresent\n volumeMounts:\n - mountPath: /target\n name: plugins\n\nconfiguration:\n provider: aws\n backupStorageLocation:\n provider: aws\n name: velerobackupnew\n bucket: bucketnew\n default: true\n config:\n region: default\n s3ForcePathStyle: true\n s3Url: https://s3.waw4-1.3Engines.com\ncredentials:\n secretContents: ## enter access and secret key to ec2 bucket. 
This configuration will create a Kubernetes secret.\n cloud: |\n [default]\n aws_access_key_id= c4b4ee62a18f4e0ba23f71629d2038e1x\n aws_secret_access_key= dee1581dac214d3dsa34037e826f9148\n ##existingSecret: ## If you want to use an existing secret, created from a sealed secret, then use this variable and omit credentials.secretContents.\nsnapshotsEnabled: false\ndeployRestic: true\nrestic:\n podVolumePath: /var/lib/kubelet/pods\n privileged: true\nschedules:\n mybackup:\n disabled: false\n schedule: \"0 * * * *\"\n template:\n ttl: \"168h\"\n snapshotVolumes: false\n</code></pre> <pre><code>initContainers:\n- name: velero-plugin-for-aws\n image: velero/velero-plugin-for-aws:v1.4.0\n imagePullPolicy: IfNotPresent\n volumeMounts:\n - mountPath: /target\n name: plugins\n\nconfiguration:\n provider: aws\n backupStorageLocation:\n provider: aws\n name: velerobackupnew\n bucket: bucketnew\n default: true\n config:\n region: waw3-1\n s3ForcePathStyle: true\n s3Url: https://s3.waw3-1.3Engines.com\ncredentials:\n secretContents: ## enter access and secret key to ec2 bucket. 
This configuration will create a Kubernetes secret.\n cloud: |\n [default]\n aws_access_key_id= c4b4ee62a18f4e0ba23f71629d2038e1x\n aws_secret_access_key= dee1581dac214d3dsa34037e826f9148\n ##existingSecret: ## If you want to use an existing secret, created from a sealed secret, then use this variable and omit credentials.secretContents.\nsnapshotsEnabled: false\ndeployRestic: true\nrestic:\n podVolumePath: /var/lib/kubelet/pods\n privileged: true\nschedules:\n mybackup:\n disabled: false\n schedule: \"0 * * * *\"\n template:\n ttl: \"168h\"\n snapshotVolumes: false\n</code></pre> <pre><code>initContainers:\n- name: velero-plugin-for-aws\n image: velero/velero-plugin-for-aws:v1.4.0\n imagePullPolicy: IfNotPresent\n volumeMounts:\n - mountPath: /target\n name: plugins\n\nconfiguration:\n provider: aws\n backupStorageLocation:\n provider: aws\n name: velerobackupnew\n bucket: bucketnew\n default: true\n config:\n region: default\n s3ForcePathStyle: true\n s3Url: https://s3.waw3-2.3Engines.com\ncredentials:\n secretContents: ## enter access and secret key to ec2 bucket. 
This configuration will create a Kubernetes secret.\n cloud: |\n [default]\n aws_access_key_id= c4b4ee62a18f4e0ba23f71629d2038e1x\n aws_secret_access_key= dee1581dac214d3dsa34037e826f9148\n ##existingSecret: ## If you want to use an existing secret, created from a sealed secret, then use this variable and omit credentials.secretContents.\nsnapshotsEnabled: false\ndeployRestic: true\nrestic:\n podVolumePath: /var/lib/kubelet/pods\n privileged: true\nschedules:\n mybackup:\n disabled: false\n schedule: \"0 * * * *\"\n template:\n ttl: \"168h\"\n snapshotVolumes: false\n</code></pre> <pre><code>initContainers:\n- name: velero-plugin-for-aws\n image: velero/velero-plugin-for-aws:v1.4.0\n imagePullPolicy: IfNotPresent\n volumeMounts:\n - mountPath: /target\n name: plugins\n\nconfiguration:\n provider: aws\n backupStorageLocation:\n provider: aws\n name: velerobackupnew\n bucket: bucketnew\n default: true\n config:\n region: default\n s3ForcePathStyle: true\n s3Url: https://s3.fra1-2.3Engines.com\ncredentials:\n secretContents: ## enter access and secret key to ec2 bucket. This configuration will create a Kubernetes secret.\n cloud: |\n [default]\n aws_access_key_id= c4b4ee62a18f4e0ba23f71629d2038e1x\n aws_secret_access_key= dee1581dac214d3dsa34037e826f9148\n ##existingSecret: ## If you want to use an existing secret, created from a sealed secret, then use this variable and omit credentials.secretContents.\nsnapshotsEnabled: false\ndeployRestic: true\nrestic:\n podVolumePath: /var/lib/kubelet/pods\n privileged: true\nschedules:\n mybackup:\n disabled: false\n schedule: \"0 * * * *\"\n template:\n ttl: \"168h\"\n snapshotVolumes: false\n</code></pre>"},{"location":"kubernetes/Backup-of-Kubernetes-Cluster-using-Velero.html.html#installation-step-3-creating-namespace","title":"Installation step 3 Creating namespace\ud83d\udd17","text":"<p>Velero must be installed in an eponymous namespace, velero. 
This is the command to create it:</p> <pre><code>kubectl create namespace velero\nnamespace/velero created\n</code></pre>"},{"location":"kubernetes/Backup-of-Kubernetes-Cluster-using-Velero.html.html#installation-step-4-installing-velero-with-a-helm-chart","title":"Installation step 4 Installing Velero with a Helm chart\ud83d\udd17","text":"<p>Here are the commands to install Velero by means of a Helm chart:</p> <pre><code>helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts\n</code></pre> <p>The output is:</p> <pre><code>\"vmware-tanzu\" has been added to your repositories\n</code></pre> <p>The following command will install Velero onto the cluster:</p> <pre><code>helm install vmware-tanzu/velero --namespace velero --version 2.28 -f values.yaml --generate-name\n</code></pre> <p>The output will look like this:</p> <p></p> <p>To see the version of Velero that is actually installed, use:</p> <pre><code>helm list --namespace velero\n</code></pre> <p>Note the name used, velero-1721031498; we are going to use it in the rest of the article. 
In your case, note the correct Velero release name and substitute it for 1721031498.</p> <p>Here is how to check that Velero is up and running:</p> <pre><code>kubectl get deployment/velero-1721031498 -n velero\n</code></pre> <p>The output will be similar to this:</p> <pre><code>NAME READY UP-TO-DATE AVAILABLE AGE\nvelero-1721031498 1/1 1 1 5m30s\n</code></pre> <p>Check that the secret has been created:</p> <pre><code>kubectl get secret/velero-1721031498 -n velero\n</code></pre> <p>The result is:</p> <pre><code>NAME TYPE DATA AGE\nvelero-1721031498 Opaque 1 3d1h\n</code></pre>"},{"location":"kubernetes/Backup-of-Kubernetes-Cluster-using-Velero.html.html#installation-step-5-installing-velero-cli","title":"Installation step 5 Installing Velero CLI\ud83d\udd17","text":"<p>The final step is to install the Velero CLI \u2013 a command-line interface suitable for working from the terminal window on your operating system.</p> <p>Download the client for your operating system from: https://github.com/vmware-tanzu/velero/releases, using wget. Here we are downloading version</p> <p>velero-v1.9.1-linux-amd64.tar.gz</p> <p>but it is recommended to download the latest version. 
In that case, change the name of the tar.gz file accordingly.</p> <pre><code>wget https://github.com/vmware-tanzu/velero/releases/download/v1.9.1/velero-v1.9.1-linux-amd64.tar.gz\n</code></pre> <p>Extract the tarball:</p> <pre><code>tar -xvf velero-v1.9.1-linux-amd64.tar.gz\n</code></pre> <p>This is the expected result:</p> <pre><code>velero-v1.9.1-linux-amd64/LICENSE\nvelero-v1.9.1-linux-amd64/examples/README.md\nvelero-v1.9.1-linux-amd64/examples/minio\nvelero-v1.9.1-linux-amd64/examples/minio/00-minio-deployment.yaml\nvelero-v1.9.1-linux-amd64/examples/nginx-app\nvelero-v1.9.1-linux-amd64/examples/nginx-app/README.md\nvelero-v1.9.1-linux-amd64/examples/nginx-app/base.yaml\nvelero-v1.9.1-linux-amd64/examples/nginx-app/with-pv.yaml\nvelero-v1.9.1-linux-amd64/velero\n</code></pre> <p>Move the extracted velero binary to somewhere in your $PATH (/usr/local/bin for most users):</p> <pre><code>cd velero-v1.9.1-linux-amd64\n# System might force using sudo\nsudo mv velero /usr/local/bin\n# check if velero is working\nvelero version\n</code></pre> <p></p> <p>After these operations, you should be able to use velero commands. For help on how to use them, execute:</p> <pre><code>velero help\n</code></pre>"},{"location":"kubernetes/Backup-of-Kubernetes-Cluster-using-Velero.html.html#working-with-velero","title":"Working with Velero\ud83d\udd17","text":"<p>So far, we have</p> <ul> <li>created an object store named \u201cbucketnew\u201d and</li> <li>told velero to use it through the bucket: parameter in the values.yaml file.</li> </ul> <p>Velero will create another object store called backups under \u201cbucketnew\u201d and then continue creating object stores for particular backups. For example, the following command will add an object store called mybackup2:</p> <pre><code>velero backup create mybackup2\nBackup request \"mybackup2\" submitted successfully.\n</code></pre> <p>Here is what it will look like in Horizon:</p> <p></p> <p>Let us add two other backups. 
The first will back up all API objects in the velero namespace:</p> <pre><code>velero backup create mybackup3 --include-namespaces velero\n</code></pre> <p>The second will back up all API objects in the default namespace:</p> <pre><code>velero backup create mybackup5 --include-namespaces default\nBackup request \"mybackup5\" submitted successfully.\n</code></pre> <p>This is the object store structure after these three backups:</p> <p></p> <p>You can also use the velero CLI to list the existing backups:</p> <pre><code>velero backup get\n</code></pre> <p>This is the result in the terminal window:</p> <p></p>"},{"location":"kubernetes/Backup-of-Kubernetes-Cluster-using-Velero.html.html#example-1-basics-of-restoring-an-application","title":"Example 1 Basics of Restoring an Application\ud83d\udd17","text":"<p>Let us now demonstrate how to restore a Kubernetes application. Let us first clone an example app from GitHub. Execute this:</p> <pre><code>git clone https://github.com/vmware-tanzu/velero.git\nCloning into 'velero'...\nResolving deltas: 100% (27049/27049), done.\ncd velero\n</code></pre> <p>Start the sample nginx app:</p> <pre><code>kubectl apply -f examples/nginx-app/base.yaml\nnamespace/nginx-example unchanged\ndeployment.apps/nginx-deployment unchanged\nservice/my-nginx unchanged\n</code></pre> <p>Create a backup:</p> <pre><code>velero backup create nginx-backup --include-namespaces nginx-example\nBackup request \"nginx-backup\" submitted successfully.\n</code></pre> <p>This is what nginx-backup looks like in Horizon:</p> <p></p> <p>Simulate a disaster:</p> <pre><code>kubectl delete namespaces nginx-example\n# Wait for the namespace to be deleted\nnamespace \"nginx-example\" deleted\n</code></pre> <p>Restore your lost resources:</p> <pre><code>velero restore create --from-backup nginx-backup\nRestore request \"nginx-backup-20220728013338\" submitted successfully.\nRun `velero restore describe nginx-backup-20220728013338` or 
`velero restore logs nginx-backup-20220728013338` for more details.\n\nvelero backup get\nNAME STATUS ERRORS WARNINGS CREATED EXPIRES STORAGE LOCATION SELECTOR\nbackup New 0 0 &lt;nil&gt; n/a &lt;none&gt;\nnginx-backup New 0 0 &lt;nil&gt; n/a &lt;none&gt;\n</code></pre>"},{"location":"kubernetes/Backup-of-Kubernetes-Cluster-using-Velero.html.html#example-2-snapshot-of-restoring-an-application","title":"Example 2 Snapshot of restoring an application\ud83d\udd17","text":"<p>Start the sample nginx app:</p> <pre><code>kubectl apply -f examples/nginx-app/with-pv.yaml\nnamespace/nginx-example created\npersistentvolumeclaim/nginx-logs created\ndeployment.apps/nginx-deployment created\nservice/my-nginx created\n</code></pre> <p>Create a backup with PV snapshotting:</p> <pre><code>velero backup create nginx-backup-vp --include-namespaces nginx-example\nBackup request \"nginx-backup-vp\" submitted successfully.\nRun `velero backup describe nginx-backup-vp` or `velero backup logs nginx-backup-vp` for more details.\n</code></pre> <p>Simulate a disaster:</p> <pre><code>kubectl delete namespaces nginx-example\nnamespace \"nginx-example\" deleted\n</code></pre> <p>Important</p> <p>Because the default reclaim policy for dynamically-provisioned PVs is \u201cDelete\u201d, these commands should trigger your cloud provider to delete the disk that backs the PV. 
Deletion is asynchronous, so this may take some time.</p> <p>Restore your lost resources:</p> <pre><code>velero restore create --from-backup nginx-backup-vp\nRestore request \"nginx-backup-20220728015234\" submitted successfully.\nRun `velero restore describe nginx-backup-20220728015234` or `velero restore logs nginx-backup-20220728015234` for more details.\n</code></pre>"},{"location":"kubernetes/Backup-of-Kubernetes-Cluster-using-Velero.html.html#delete-a-velero-backup","title":"Delete a Velero backup\ud83d\udd17","text":"<p>There are two ways to delete a backup made by Velero.</p> Delete backup custom resource only <pre><code>kubectl delete backup &lt;backupName&gt; -n velero\n</code></pre> <p>will delete the backup custom resource only and will not delete any associated data from object/block storage</p> Delete all data in object/block storage <pre><code>velero backup delete &lt;backupName&gt;\n</code></pre> <p>will delete the backup resource including all data in object/block storage</p>"},{"location":"kubernetes/Backup-of-Kubernetes-Cluster-using-Velero.html.html#removing-velero-from-the-cluster","title":"Removing Velero from the cluster\ud83d\udd17","text":""},{"location":"kubernetes/Backup-of-Kubernetes-Cluster-using-Velero.html.html#uninstall-velero","title":"Uninstall Velero\ud83d\udd17","text":"<p>To uninstall the Velero release:</p> <pre><code>helm uninstall velero-1721031498 --namespace velero\n</code></pre>"},{"location":"kubernetes/Backup-of-Kubernetes-Cluster-using-Velero.html.html#to-delete-velero-namespace","title":"To delete Velero namespace\ud83d\udd17","text":"<pre><code>kubectl delete namespace velero\n</code></pre>"},{"location":"kubernetes/Backup-of-Kubernetes-Cluster-using-Velero.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>Now that Velero is up and running, you can integrate it into your routine. 
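As the introduction noted, Velero's CLI lends itself to cron-based automation. A minimal sketch follows; the nightly- name prefix, target namespace and TTL are illustrative assumptions, not values from this article:

```shell
# Sketch: build a dated, unique backup name and the matching velero command.
# Backup names must be unique, so we append the current date.
NAME="nightly-$(date +%Y%m%d)"
CMD="velero backup create $NAME --include-namespaces default --ttl 240h"
# Print the command for review; run $CMD directly once it looks right.
echo "$CMD"
# An equivalent crontab entry (note that % must be escaped as \% in crontab):
# 0 3 * * * /usr/local/bin/velero backup create nightly-$(date +\%Y\%m\%d) --include-namespaces default --ttl 240h
```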
It will be useful in all classic backup scenarios \u2013 for disaster recovery, cluster and namespace migration, testing and development, application rollbacks, compliance and auditing and so on. Apart from these broad use cases, Velero will help with specific Kubernetes backup tasks, such as:</p> <ul> <li>backing up and restoring deployments, services, config maps and secrets,</li> <li>selective backups, say, only for specific namespaces or label selectors,</li> <li>volume snapshots using cloud provider APIs (AWS, Azure, GCP etc.)</li> <li>snapshots of persistent volumes for point-in-time recovery</li> <li>saving backup data to AWS S3, Google Cloud Storage, Azure Blob Storage etc.</li> <li>integration with the kubectl command so that Custom Resource Definitions (CRDs) are used to define backup and restore configuration.</li> </ul>"},{"location":"kubernetes/CICD-pipelines-with-GitLab-on-3Engines-Cloud-Kubernetes-building-a-Docker-image.html.html","title":"CI/CD pipelines with GitLab on 3Engines Cloud Kubernetes - building a Docker image\ud83d\udd17","text":"<p>GitLab provides an isolated, private code registry and space for collaboration on code by teams. It also offers a broad range of code deployment automation capabilities. 
In this article, we will explain how to automate building a Docker image of your app.</p>"},{"location":"kubernetes/CICD-pipelines-with-GitLab-on-3Engines-Cloud-Kubernetes-building-a-Docker-image.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Add your public key to GitLab and access GitLab from your command line</li> <li>Create project in GitLab and add sample application code</li> <li>Define environment variables with your DockerHub coordinates in GitLab</li> <li>Create pipeline to build your app\u2019s Docker image using Kaniko</li> <li>Trigger pipeline build</li> </ul>"},{"location":"kubernetes/CICD-pipelines-with-GitLab-on-3Engines-Cloud-Kubernetes-building-a-Docker-image.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p> <p>No. 2 Kubernetes cluster</p> <p>How to Create a Kubernetes Cluster Using 3Engines Cloud OpenStack Magnum</p> <p>No. 3 Local version of GitLab available</p> <p>Your local instance of GitLab is available and properly accessible by your GitLab user.</p> <p>In this article, we assume the setup from the article Install GitLab on 3Engines Cloud Kubernetes. If you use a different instance of GitLab, there may be some differences, e.g. in where certain functionalities are located in the GUI.</p> <p>In this article, we shall be using gitlab.mysampledomain.info as the GitLab instance. Be sure to replace it with your own domain.</p> <p>No. 4 git CLI operational</p> <p>The git command must be installed locally. You may use it with GitHub, GitLab and other source control platforms based on git.</p> <p>No. 5 Account at DockerHub</p> <p>Access to your DockerHub account (or another container image registry).</p> <p>No. 6 Using Kaniko</p> <p>kaniko is a tool to build container images based on a provided Dockerfile. 
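For illustration, a Dockerfile of the kind kaniko consumes for a small Flask app might look like the sketch below; this is a generic example, and the sample repository's actual Dockerfile may differ:

```shell
# Sketch: write a minimal, generic Dockerfile for a Flask app. kaniko builds
# an image from exactly this kind of file, with no Docker daemon required.
cat > Dockerfile <<'EOF'
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .
CMD ["python", "app.py"]
EOF
```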
For a more elaborate overview of kaniko, refer to its documentation.</p> <p>No. 7 Private and public keys available</p> <p>To connect to our GitLab instance we need a combination of a private and a public key. You can use any key pair; one option is to use OpenStack Horizon to create one. For reference, see:</p> <p>See How to create key pair in OpenStack Dashboard on 3Engines Cloud</p> <p>Here, we use the key pair to connect to the GitLab instance that we previously installed in Prerequisite No. 3.</p>"},{"location":"kubernetes/CICD-pipelines-with-GitLab-on-3Engines-Cloud-Kubernetes-building-a-Docker-image.html.html#step-1-add-your-public-key-to-gitlab-and-access-gitlab-from-your-command-line","title":"Step 1 Add your public key to GitLab and access GitLab from your command line\ud83d\udd17","text":"<p>In order to access your GitLab instance from the command line, GitLab uses SSH-based authentication. To ensure your console uses these keys for authentication by default, ensure your keys are stored in the ~/.ssh folder and are called id_rsa (private key) and id_rsa.pub (public key).</p> <p>The public key should then be added to the authorized keys in the GitLab GUI. 
To add the public key, click on your avatar icon:</p> <p></p> <p>Then scroll to \u201cPreferences\u201d, choose \u201cSSH Keys\u201d from the left menu and paste the contents of your public key into the \u201cKey\u201d field.</p> <p></p> <p>If the GitLab instance you are using is hosted, say, on domain mysampledomain.info, you can use a command like this</p> <pre><code>ssh -T git@gitlab.mysampledomain.info\n</code></pre> <p>to verify that you have access to GitLab from the CLI.</p> <p>You should see an output similar to the following:</p> <p></p>"},{"location":"kubernetes/CICD-pipelines-with-GitLab-on-3Engines-Cloud-Kubernetes-building-a-Docker-image.html.html#step-2-create-project-in-gitlab-and-add-sample-application-code","title":"Step 2 Create project in GitLab and add sample application code\ud83d\udd17","text":"<p>We will first add a sample application in GitLab. This is a minimal Python-Flask application; its code can be downloaded from this 3Engines Cloud GitHub repository accompanying this Knowledge Base.</p> <p>As a first step in this section, we will initiate the GitLab remote origin. Log in to the GitLab GUI and, from the default screen, click the button \u201cNew Project\u201d, then \u201cCreate blank project\u201d. It will transfer you to the view below.</p> <p></p> <p>In that view, the project URL will be pre-filled, corresponding to the URL of your GitLab instance. In the place denoted with a red rectangle, you should enter your user name; usually, it will be root but can be anything else. If there already are some users defined in GitLab, their names will appear in a drop-down menu.</p> <p></p> <p>Enter your preferred project name and slug, in our case \u201cGitLabCI Sample\u201d and \u201cGitLabCI-sample\u201d, respectively. Choose the visibility level to your preference. Uncheck the box \u201cInitialize repository with a README\u201d, because we will initiate the repository from the existing code. 
(We are not initializing the repo, we are only establishing the project in the origin.)</p> <p>After submitting the \u201cCreate project\u201d form, you will receive a list of commands to work with your repo. Review them and switch to the CLI. Clone the entire 3Engines K8s samples repo, then extract the sub-folder called HelloWorld-Docker-image-Flask. For clarity, we rename its contents to a new folder, GitLabCI-sample. Use</p> <pre><code>mkdir ~/GitLabCI-sample\n</code></pre> <p>if this is the first time you are working through this article, so that the folder is ready for the following set of commands:</p> <pre><code>git clone https://github.com/3Engines/K8s-samples\nmv ~/K8s-samples/HelloWorld-Docker-image-Flask/* ~/GitLabCI-sample\nrm -rf K8s-samples/\n</code></pre> <p>After the above sequence of steps, we have the folder GitLabCI-sample with 3 files:</p> <ul> <li>app.py, which is our Python Flask application code,</li> <li>a Dockerfile and</li> <li>the dependencies file requirements.txt.</li> </ul> <p>We can then cd into this folder, initialize the git repo, commit locally and push to the remote with the following commands (replace domain and username):</p> <pre><code>cd GitLabCI-sample\ngit init\ngit remote add origin git@gitlab.mysampledomain.info:myusername/GitLabCI-sample.git\ngit add .\ngit commit -m \"First commit\"\ngit push origin master\n</code></pre> <p>Most likely, the user name myusername here will be just root.</p> <p>When we enter the GitLab GUI, we can see that our changes are committed:</p> <p></p>"},{"location":"kubernetes/CICD-pipelines-with-GitLab-on-3Engines-Cloud-Kubernetes-building-a-Docker-image.html.html#step-3-define-environment-variables-with-your-dockerhub-coordinates-in-gitlab","title":"Step 3 Define environment variables with your DockerHub coordinates in GitLab\ud83d\udd17","text":"<p>We want to create a CI/CD pipeline that will, upon a new commit, build a Docker image of our app and push it to the Docker Hub container registry. 
Let us use environment variables in GitLab to enable connection to the Docker registry. Use the following keys and values:</p> <pre><code>CI_COMMIT_REF_SLUG=latest\nCI_REGISTRY=https://index.docker.io/v1/\nCI_REGISTRY_IMAGE=index.docker.io/yourdockerhubuser/gitlabci-sample\nCI_REGISTRY_USER=yourdockerhubuser\nCI_REGISTRY_PASSWORD=yourdockerhubrepo\n</code></pre> <p>The first two, CI_COMMIT_REF_SLUG and CI_REGISTRY, are hardcoded for DockerHub. The other three are:</p> CI_REGISTRY_IMAGE The name of the Docker image to be created. Enter your user name for the Docker Hub site (yourdockerhubuser). If, for instance, the user name is paultur, the image in the Docker registry will be /paultur/gitlabci-sample, as seen at the end of this article. CI_REGISTRY_USER Enter yourdockerhubuser, which, again, is your user name in Docker Hub. CI_REGISTRY_PASSWORD <p>Enter yourdockerhubrepo, which can be your account password or a specially created access token. To create one such token, see the option Account Settings \u2013&gt; Security on the Docker site:</p> <p></p> <p>Back in the GitLab UI, from the Settings menu in project view, go to the CI/CD submenu:</p> <p></p> <p>Scroll down to the section \u201cVariables\u201d and fill in the respective forms. In the GUI, it will look similar to this:</p> <p></p> <p>Now that the values of variables are set up, we will use them in our CI/CD pipeline.</p>"},{"location":"kubernetes/CICD-pipelines-with-GitLab-on-3Engines-Cloud-Kubernetes-building-a-Docker-image.html.html#step-4-create-a-pipeline-to-build-your-apps-docker-image-using-kaniko","title":"Step 4 Create a pipeline to build your app\u2019s Docker image using Kaniko\ud83d\udd17","text":"<p>The CI/CD pipeline that we are creating in GitLab will have only one job that</p> <ul> <li>builds the image and</li> <li>pushes it to the Docker image registry.</li> </ul> <p>In real-life scenarios, pipelines would also include additional jobs, e.g. 
related to unit or integration tests.</p> <p>GitLab recognizes that a repository/project is configured to implement a CI/CD pipeline by the presence of the .gitlab-ci.yml file at the root of the project. One could also apply CI/CD to the project from the GitLab GUI (CI/CD menu entry \u2192 Pipelines), using one of the provided default templates. However, the result will be, similarly, adding a specifically configured .gitlab-ci.yml file to the root of the project.</p> <p>Now create a .gitlab-ci.yml file with the contents as below and place it into the folder GitLabCI-sample. The file contains the configuration of our pipeline and defines a single job called docker_image_build.</p> <p>.gitlab-ci.yml</p> <pre><code>docker_image_build:\n image:\n name: gcr.io/kaniko-project/executor:v1.14.0-debug\n entrypoint: [\"\"]\n script:\n - echo \"{\\\"auths\\\":{\\\"${CI_REGISTRY}\\\":{\\\"auth\\\":\\\"$(printf \"%s:%s\" \"${CI_REGISTRY_USER}\" \"${CI_REGISTRY_PASSWORD}\" | base64 | tr -d '\\n')\\\" }}}\" &gt; /kaniko/.docker/config.json\n - &gt;-\n /kaniko/executor\n --context \"${CI_PROJECT_DIR}\"\n --cache=false\n --dockerfile \"${CI_PROJECT_DIR}/Dockerfile\"\n --destination \"${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_SLUG}\"\n</code></pre> <p>When changes to our project are committed to GitLab, the CI/CD pipeline is triggered to run automatically.</p> <p>The jobs are executed by the GitLab runner. If you are using a GitLab instance set up by following Prerequisite No. 3 Local version of GitLab available, the default runner will have already been deployed in the cluster. In this case, the runner deploys a short-lived pod dedicated to running this specific pipeline. One of the containers running in the pod is based on the Kaniko image and is used to build the Docker image of our app.</p> <p>There are two key commands under the script key, and they run when the Kaniko container starts. 
Both use the values of the environment variables we previously entered into GitLab.</p> Fill in and save the contents of a standardized configuration file The first command fills in and saves the contents of config.json, which is a standardized configuration file used for authenticating to DockerHub. Build and publish the container image to DockerHub The second command builds and publishes the container image to DockerHub."},{"location":"kubernetes/CICD-pipelines-with-GitLab-on-3Engines-Cloud-Kubernetes-building-a-Docker-image.html.html#step-5-trigger-pipeline-build","title":"Step 5 Trigger pipeline build\ud83d\udd17","text":"<p>A commit triggers the pipeline to run. After adding the file, publish changes to the repository with the following set of commands:</p> <pre><code>git add .\ngit commit -m \"Add .gitlab-ci.yml\"\ngit push origin master\n</code></pre> <p>After this commit, if we switch to the CI/CD screen of our project, we should see that the pipeline is first in running status, and completed afterwards:</p> <p></p> <p>Also, when browsing our Docker registry, the image is published:</p> <p></p>"},{"location":"kubernetes/CICD-pipelines-with-GitLab-on-3Engines-Cloud-Kubernetes-building-a-Docker-image.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>Add your unit and integration tests to this pipeline. They can be added as additional steps in the .gitlab-ci.yml file. A complete reference can be found here: https://docs.gitlab.com/ee/ci/yaml/</p>"},{"location":"kubernetes/Configuring-IP-Whitelisting-for-OpenStack-Load-Balancer-using-Horizon-and-CLI-on-3Engines-Cloud.html.html","title":"Configuring IP Whitelisting for OpenStack Load Balancer using Horizon and CLI on 3Engines Cloud\ud83d\udd17","text":"<p>This guide explains how to configure IP whitelisting (allowed_cidrs) on an existing OpenStack Load Balancer using Horizon and CLI commands. 
The configuration will limit access to your cluster through the load balancer.</p>"},{"location":"kubernetes/Configuring-IP-Whitelisting-for-OpenStack-Load-Balancer-using-Horizon-and-CLI-on-3Engines-Cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Prepare Your Environment</li> <li>Whitelist the load balancer via the CLI</li> </ul>"},{"location":"kubernetes/Configuring-IP-Whitelisting-for-OpenStack-Load-Balancer-using-Horizon-and-CLI-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p> <p>No. 2 List of IP addresses/ranges to whitelist</p> <p>This is the list of IP addresses from which you want the load balancer to accept traffic.</p> <p>In this article, we will use the following two addresses to whitelist:</p> <ul> <li>10.0.0.0/8</li> <li>10.95.255.0/24</li> </ul> <p>No. 3 Python Octavia Client</p> <p>To operate Load Balancers with the CLI, the Python Octavia Client (python-octaviaclient) is required. It is a command-line client for the OpenStack Load Balancing service. 
Install the load-balancer (Octavia) plugin with the following command from the Terminal window, on Ubuntu 22.04:</p> <pre><code>pip install python-octaviaclient\n</code></pre> <p>Or, if you have virtualenvwrapper installed:</p> <pre><code>mkvirtualenv python-octaviaclient\npip install python-octaviaclient\n</code></pre>"},{"location":"kubernetes/Configuring-IP-Whitelisting-for-OpenStack-Load-Balancer-using-Horizon-and-CLI-on-3Engines-Cloud.html.html#prepare-your-environment","title":"Prepare Your Environment\ud83d\udd17","text":"<p>First of all, you have to find the id of your load balancer and its listener.</p>"},{"location":"kubernetes/Configuring-IP-Whitelisting-for-OpenStack-Load-Balancer-using-Horizon-and-CLI-on-3Engines-Cloud.html.html#horizon","title":"Horizon:\ud83d\udd17","text":"<p>To find a load balancer id, go to Project &gt;&gt; Network &gt;&gt; Load Balancers and find the one associated with your cluster (its name will start with your cluster name).</p> <p></p> <p>Click on the load balancer name (in this case <code>lb-testing-ih347dstxyl2-api_lb_fixed-w2im3obvdv2p-loadbalancer_with_flavor-ykcmf6vvphld</code>) then go to the Listeners pane. 
There you will have a listener associated with that load balancer.</p> <p></p>"},{"location":"kubernetes/Configuring-IP-Whitelisting-for-OpenStack-Load-Balancer-using-Horizon-and-CLI-on-3Engines-Cloud.html.html#cli","title":"CLI\ud83d\udd17","text":"<p>To use the CLI to find the listener, you have to know the following two cluster parameters:</p> <ul> <li>Stack ID</li> <li>Cluster ID</li> </ul> <p>You can find them from Horizon commands Container Infra \u2013&gt; Clusters and then click on the name of the cluster:</p> <p></p> <p>At the bottom of the window, find the Stack ID:</p> <p></p> <p>Now execute the command below; its output is the stack_id:</p> <pre><code>openstack coe cluster show &lt;your_cluster_id&gt; \\\n-f value -c stack_id\n\n&lt;stack_id for example 12345678-1234-1234-1234-123456789011&gt;\n</code></pre> <p>To find the LB_ID:</p> <pre><code>openstack stack resource list &lt;your_stack_id&gt; \\\n-n 5 -c resource_name -c physical_resource_id \\\n| grep loadbalancer_with_flavor\n\nloadbalancer_with_flavor &lt;lb_id for example 12345678-1234-1234-1234-123456789011&gt;\n</code></pre> <p>With that information, we can now find our listener_id; it is to this component that we will attach the whitelist:</p> <pre><code>openstack loadbalancer show 2d6b335f-fb05-4496-8593-887f7e2c49cf \\\n-c listeners \\\n-f value\n\n&lt;listener_id for example 12345678-1234-1234-1234-123456789011&gt;\n</code></pre>"},{"location":"kubernetes/Configuring-IP-Whitelisting-for-OpenStack-Load-Balancer-using-Horizon-and-CLI-on-3Engines-Cloud.html.html#whitelist-the-load-balancer-via-the-cli","title":"Whitelist the load balancer via the CLI\ud83d\udd17","text":"<p>We now have the listener and the IP addresses which will be whitelisted. 
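</p> <p>Before attaching the CIDRs, it can help to sanity-check that each one is valid CIDR notation. This is a local sketch only (it assumes python3 is available); the values are the two example ranges from Prerequisite No. 2:</p>

```shell
# Validate each CIDR locally before passing it to --allowed-cidr.
# python3's ipaddress module rejects malformed networks with a non-zero exit.
for cidr in 10.0.0.0/8 10.95.255.0/24; do
  if python3 -c "import ipaddress, sys; ipaddress.ip_network(sys.argv[1])" "$cidr"; then
    echo "valid: $cidr"
  else
    echo "invalid: $cidr"
  fi
done
```

<p>A typo such as 10.95.255.1/24 (host bits set) would be reported as invalid rather than silently misconfiguring the listener.</p> <p>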
This is the command that will set up the whitelisting:</p> <pre><code>openstack loadbalancer listener set \\\n--allowed-cidr 10.0.0.0/8 \\\n--allowed-cidr 10.95.255.0/24 \\\n&lt;listener_id for example 12345678-1234-1234-1234-123456789011&gt;\n</code></pre> <p></p>"},{"location":"kubernetes/Configuring-IP-Whitelisting-for-OpenStack-Load-Balancer-using-Horizon-and-CLI-on-3Engines-Cloud.html.html#state-of-security-before-and-after","title":"State of Security: Before and After\ud83d\udd17","text":"<p>Before implementing IP whitelisting, the load balancer accepts traffic from all sources. After completing the procedure:</p> <ul> <li>Only specified IPs can access the load balancer.</li> <li>Unauthorized access attempts are denied.</li> </ul>"},{"location":"kubernetes/Configuring-IP-Whitelisting-for-OpenStack-Load-Balancer-using-Horizon-and-CLI-on-3Engines-Cloud.html.html#verification-tools","title":"Verification Tools\ud83d\udd17","text":"<p>Various tools can verify that the protection is installed and active:</p> livez: Kubernetes monitoring endpoint. nmap (free): For port scanning and access verification. curl (free): To confirm access control from specific IPs. Wireshark (free): For packet-level analysis."},{"location":"kubernetes/Configuring-IP-Whitelisting-for-OpenStack-Load-Balancer-using-Horizon-and-CLI-on-3Engines-Cloud.html.html#testing-using-curl-and-livez","title":"Testing using curl and livez\ud83d\udd17","text":"<p>Here is how we could test it:</p> <pre><code>curl -k https://&lt;KUBE_API_IP&gt;:6443/livez?verbose\n</code></pre> <p>That command assumes that you have</p> curl: installed and operational, which you can check through Horizon commands API Access \u2013&gt; View Credentials. livez: the endpoint which will show what happens with the load balancer. 
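<p>When the source address is not whitelisted, curl does not receive an HTTP error; the TCP connection simply never completes, so curl exits with a timeout. A small sketch of how to interpret that (203.0.113.1 is a reserved documentation address used here as a stand-in for an endpoint that does not answer):</p>

```shell
# Probe an endpoint with a short timeout and map curl's exit code to a verdict.
# Exit code 28 is "operation timed out"; 7 is "failed to connect".
curl -m 2 -s -o /dev/null "http://203.0.113.1:6443/livez"
rc=$?
case "$rc" in
  0)    echo "reachable" ;;
  7|28) echo "blocked or unreachable" ;;
  *)    echo "other error: $rc" ;;
esac
```

<p>Substitute your own KUBE_API_IP to run the same check against the real listener from an allowed and from a non-allowed source.</p>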
<p>This would be a typical response before changes:</p> <pre><code>curl -k https://&lt;KUBE_API_IP&gt;:6443/livez?verbose\n[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[+]poststarthook/apiservice-discovery-controller ok\nlivez check passed\n</code></pre> <p>And, this would be a typical response after the changes:</p> <pre><code>curl -k https://&lt;KUBE_API_IP&gt;:6443/livez?verbose -m 5\ncurl: (28) Connection timed out after 5000 milliseconds\n</code></pre> <p>Whitelisting prevents traffic from all IP addresses apart from those that are 
allowed by --allowed-cidr.</p>"},{"location":"kubernetes/Configuring-IP-Whitelisting-for-OpenStack-Load-Balancer-using-Horizon-and-CLI-on-3Engines-Cloud.html.html#testing-with-nmap","title":"Testing with nmap\ud83d\udd17","text":"<p>To test with nmap:</p> <pre><code>nmap -p &lt;PORT&gt; &lt;LOAD_BALANCER_IP&gt;\n</code></pre>"},{"location":"kubernetes/Configuring-IP-Whitelisting-for-OpenStack-Load-Balancer-using-Horizon-and-CLI-on-3Engines-Cloud.html.html#testing-with-curl-directly","title":"Testing with curl directly\ud83d\udd17","text":"<p>To test with curl:</p> <pre><code>curl http://&lt;LOAD_BALANCER_IP&gt;\n</code></pre>"},{"location":"kubernetes/Configuring-IP-Whitelisting-for-OpenStack-Load-Balancer-using-Horizon-and-CLI-on-3Engines-Cloud.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>You can wrap up this procedure with Terraform and apply it to a larger number of load balancers. See Configuring IP Whitelisting for OpenStack Load Balancer using Terraform on 3Engines Cloud</p> <p>Also, compare with Implementing IP Whitelisting for Load Balancers with Security Groups on 3Engines Cloud</p>"},{"location":"kubernetes/Configuring-IP-Whitelisting-for-OpenStack-Load-Balancer-using-Terraform-on-3Engines-Cloud.html.html","title":"Configuring IP Whitelisting for OpenStack Load Balancer using Terraform on 3Engines Cloud\ud83d\udd17","text":"<p>This guide explains how to configure IP whitelisting (allowed_cidrs) on an existing OpenStack Load Balancer using Terraform. 
The configuration will limit access to your cluster through the load balancer.</p>"},{"location":"kubernetes/Configuring-IP-Whitelisting-for-OpenStack-Load-Balancer-using-Terraform-on-3Engines-Cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Get necessary load balancer and cluster data from the Prerequisites</li> <li>Create the Terraform Configuration</li> <li>Import Existing Load Balancer Listener</li> <li>Run terraform</li> <li>Test and verify that protection of load balancer via whitelisting works</li> </ul>"},{"location":"kubernetes/Configuring-IP-Whitelisting-for-OpenStack-Load-Balancer-using-Terraform-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p> <p>No. 2 Basic parameters already defined for whitelisting</p> <p>See article Configuring IP Whitelisting for OpenStack Load Balancer using Horizon and CLI on 3Engines Cloud for definition of basic notions and parameters.</p> <p>No. 3 Terraform installed</p> <p>You will need version 1.5.0 or higher.</p> <p>For a complete introduction to and installation of Terraform on OpenStack, see article Generating and authorizing Terraform using Keycloak user on 3Engines Cloud</p> <p>No. 4 Unrestricted application credentials</p> <p>You need to have OpenStack application credentials with the unrestricted checkbox checked. Check article How to generate or use Application Credentials via CLI on 3Engines Cloud</p> <p>The first part of that article describes how to install the OpenStack client and connect it to the cloud. 
With that provision, the quickest way to create an unrestricted application credential is to run a command like this:</p> <pre><code>openstack application credential create cred_unrestricted --unrestricted\n</code></pre> <p>That would create an unrestricted credential called cred_unrestricted.</p> <p>You can also use Horizon commands Identity \u2013&gt; Application Credentials \u2013&gt; Create Application Credential and check the appropriate box:</p> <p></p> <p>Log in to your account using this unrestricted credential.</p>"},{"location":"kubernetes/Configuring-IP-Whitelisting-for-OpenStack-Load-Balancer-using-Terraform-on-3Engines-Cloud.html.html#prepare-your-environment","title":"Prepare Your Environment\ud83d\udd17","text":"<p>Work through the article in Prerequisite No. 2, from which we will derive all the input parameters, using Horizon and CLI commands.</p> <p>Also, authenticate with the application credential you got from Prerequisite No. 4.</p>"},{"location":"kubernetes/Configuring-IP-Whitelisting-for-OpenStack-Load-Balancer-using-Terraform-on-3Engines-Cloud.html.html#configure-terraform-for-whitelisting","title":"Configure Terraform for whitelisting\ud83d\udd17","text":"<p>Instead of performing the whitelisting procedure manually, we can use Terraform and store the procedure in a remote repo.</p> <p>Create a file openstack_auth.sh:</p> <pre><code>export OS_AUTH_URL=\"https://your-openstack-url:5000/v3\"\nexport OS_PROJECT_NAME=\"your-project\"\nexport OS_USERNAME=\"your-username\"\nexport OS_PASSWORD=\"your-password\"\nexport OS_REGION_NAME=\"your-region\"\n</code></pre> <p>Create a new directory for your Terraform configuration and create the following files:</p> <p>Note</p> <p>This example is created for a brand new Magnum cluster. 
You might have to adjust it a bit to suit your needs.</p> <p>Create Terraform file:</p> <p>main.tf</p> <pre><code>terraform {\n required_providers {\n openstack = {\n source = \"terraform-provider-openstack/openstack\"\n version = \"1.47.0\"\n }\n }\n}\n\nprovider \"openstack\" {\n use_octavia = true # Required for Load Balancer v2 API\n}\n</code></pre> <p>variables.tf</p> <pre><code>variable \"ID_OF_LOADBALANCER\" {\n type = string\n description = \"ID of the existing OpenStack Load Balancer\"\n}\n\nvariable \"allowed_cidrs\" {\n type = list(string)\n description = \"List of IP ranges in CIDR format to whitelist\"\n}\n</code></pre> <p>terraform.tfvars</p> <pre><code>ID_OF_LOADBALANCER = \"your-lb-id\"\nallowed_cidrs = [\n \"10.0.0.1/32\", # Single IP address\n \"192.168.1.0/24\", # IP range\n \"172.16.0.0/16\" # Larger subnet\n]\n</code></pre> <p>lb.tf</p> <pre><code>resource \"openstack_lb_listener_v2\" \"k8s_api_listener\" {\n loadbalancer_id = var.ID_OF_LOADBALANCER\n allowed_cidrs = var.allowed_cidrs\n protocol_port = \"6443\"\n protocol = \"TCP\"\n}\n</code></pre>"},{"location":"kubernetes/Configuring-IP-Whitelisting-for-OpenStack-Load-Balancer-using-Terraform-on-3Engines-Cloud.html.html#import-existing-load-balancer-listener","title":"Import Existing Load Balancer Listener\ud83d\udd17","text":"<p>Since version 1.5, Terraform can import your resource in a declarative way.</p> <p>import.tf</p> <pre><code>import {\n to = openstack_lb_listener_v2.k8s_api_listener\n id = \"your-listener-id\"\n}\n</code></pre> <p>Or you can do it in an imperative way:</p> <pre><code>terraform import openstack_lb_listener_v2.k8s_api_listener \"&lt;your-listener-id&gt;\"\n</code></pre>"},{"location":"kubernetes/Configuring-IP-Whitelisting-for-OpenStack-Load-Balancer-using-Terraform-on-3Engines-Cloud.html.html#run-terraform","title":"Run Terraform\ud83d\udd17","text":"<p>Terraform Execute</p> <pre><code>terraform init\nterraform plan -out=generated_listener.tf\nterraform apply 
generated_listener.tf\n</code></pre> <p>Example output:</p> <p>terraform output</p> <pre><code>terraform apply generated_listener.tf\nopenstack_lb_listener_v2.k8s_api_listener: Preparing import... [id=bbf39f1c-6936-4344-9957-7517d4a979b6]\nopenstack_lb_listener_v2.k8s_api_listener: Refreshing state... [id=bbf39f1c-6936-4344-9957-7517d4a979b6]\n\nTerraform used the selected providers to generate the following execution\nplan. Resource actions are indicated with the following symbols:\n ~ update in-place\n\nTerraform will perform the following actions:\n\n # openstack_lb_listener_v2.k8s_api_listener will be updated in-place\n # (imported from \"bbf39f1c-6936-4344-9957-7517d4a979b6\")\n ~ resource \"openstack_lb_listener_v2\" \"k8s_api_listener\" {\n admin_state_up = true\n ~ allowed_cidrs = [\n + \"10.0.0.1/32\",\n ]\n connection_limit = -1\n default_pool_id = \"5991eacc-5869-4205-a646-d27646ccb216\"\n default_tls_container_ref = null\n description = null\n id = \"bbf39f1c-6936-4344-9957-7517d4a979b6\"\n insert_headers = {}\n loadbalancer_id = \"2d6b335f-fb05-4496-8593-887f7e2c49cf\"\n name = \"lb-testing-ih347dstxyl2-api_lb_fixed-w2im3obvdv2p-listener-t36tocd4onxk\"\n protocol = \"TCP\"\n protocol_port = 6443\n region = \"&lt;concealed by 1Password&gt;\"\n sni_container_refs = []\n tenant_id = \"&lt;concealed by 1Password&gt;\"\n timeout_client_data = 50000\n timeout_member_connect = 5000\n timeout_member_data = 50000\n timeout_tcp_inspect = 0\n\n - timeouts {}\n }\n\nPlan: 1 to import, 0 to add, 1 to change, 0 to destroy.\n</code></pre>"},{"location":"kubernetes/Configuring-IP-Whitelisting-for-OpenStack-Load-Balancer-using-Terraform-on-3Engines-Cloud.html.html#tests","title":"Tests\ud83d\udd17","text":"<p>By default, Magnum LB does not have any access restrictions.</p> <p>Before changes:</p> <pre><code>curl -k https://&lt;KUBE_API_IP&gt;:6443/livez?verbose\n[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/start-kube-apiserver-admission-initializer 
ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[+]poststarthook/apiservice-discovery-controller ok\nlivez check passed\n</code></pre> <p>After:</p> <pre><code>curl -k https://&lt;KUBE_API_IP&gt;:6443/livez?verbose -m 5\ncurl: (28) Connection timed out after 5000 milliseconds\n</code></pre>"},{"location":"kubernetes/Configuring-IP-Whitelisting-for-OpenStack-Load-Balancer-using-Terraform-on-3Engines-Cloud.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>Compare with Implementing IP Whitelisting for Load Balancers with Security Groups on 3Engines 
Cloud</p>"},{"location":"kubernetes/Create-and-access-NFS-server-from-Kubernetes-on-3Engines-Cloud.html.html","title":"Create and access NFS server from Kubernetes on 3Engines Cloud\ud83d\udd17","text":"<p>In order to enable simultaneous read-write storage to multiple pods running on a Kubernetes cluster, we can use an NFS server.</p> <p>In this guide we will create an NFS server on a virtual machine, create a file share on this server and demonstrate accessing it from a Kubernetes pod.</p>"},{"location":"kubernetes/Create-and-access-NFS-server-from-Kubernetes-on-3Engines-Cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Set up an NFS server on a VM</li> <li>Set up a share folder on the NFS server</li> <li>Make the share available</li> <li>Deploy a test pod on the cluster</li> </ul>"},{"location":"kubernetes/Create-and-access-NFS-server-from-Kubernetes-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Hosting</p> <p>You need a 3Engines Cloud hosting account with Horizon interface https://horizon.3Engines.com.</p> <p>The resources that you require and use will be charged to your account wallet. Check your account statistics at https://portal.3Engines.com/.</p> <p>No. 2 Familiarity with Linux and cloud management</p> <p>We assume you know the basics of Linux and 3Engines Cloud management:</p> <ul> <li> <p>Creating, accessing and using virtual machines How to create new Linux VM in OpenStack Dashboard Horizon on 3Engines Cloud</p> </li> <li> <p>Creating security groups How to use Security Groups in Horizon on 3Engines Cloud</p> </li> <li> <p>Attaching floating IPs How to Add or Remove Floating IP\u2019s to your VM on 3Engines Cloud</p> </li> </ul> <p>No. 3 A running Kubernetes cluster</p> <p>You will also need a Kubernetes cluster to try out the commands. 
To create one from scratch, see How to Create a Kubernetes Cluster Using 3Engines Cloud OpenStack Magnum</p> <p>No. 4 kubectl access to the Kubernetes cloud</p> <p>As usual when working with Kubernetes clusters, you will need to use the kubectl command: How To Access Kubernetes Cluster Post Deployment Using Kubectl On 3Engines Cloud OpenStack Magnum</p>"},{"location":"kubernetes/Create-and-access-NFS-server-from-Kubernetes-on-3Engines-Cloud.html.html#1-set-up-nfs-server-on-a-vm","title":"1. Set up NFS server on a VM\ud83d\udd17","text":"<p>As a prerequisite to creating an NFS server on a VM, first, from the Network tab in Horizon, create a security group allowing ingress traffic to port 2049.</p> <p>Then create an Ubuntu VM from Horizon. During the Network selection dialog, connect the VM to the network of your Kubernetes cluster. This ensures that cluster nodes have access to the NFS server over the private network. Then add the security group with port 2049 open.</p> <p></p> <p>When the VM is created, you can see that it has a private address assigned. For this occasion, let the private address be 10.0.0.118. Take note of this address to later use it in the NFS configuration.</p> <p>Set up a floating IP on the VM, just to enable SSH access to it.</p>"},{"location":"kubernetes/Create-and-access-NFS-server-from-Kubernetes-on-3Engines-Cloud.html.html#2-set-up-a-share-folder-on-the-nfs-server","title":"2. Set up a share folder on the NFS server\ud83d\udd17","text":"<p>SSH to the VM, then run:</p> <pre><code>sudo apt-get update\nsudo apt-get install nfs-kernel-server\n</code></pre> <p>In the NFS server VM create a share folder:</p> <pre><code>sudo mkdir /mnt/myshare\n</code></pre> <p>Change the owner of the share to user nobody, so that any user on the client can access the share folder. 
More restrictive settings can be applied.</p> <pre><code>sudo chown nobody:nogroup /mnt/myshare\n</code></pre> <p>Also change the permissions of the folder, so that anyone can modify the files:</p> <pre><code>sudo chmod 777 /mnt/myshare\n</code></pre> <p>Edit the /etc/exports file and add the following line:</p> <pre><code>/mnt/myshare 10.0.0.0/24(rw,sync,no_subtree_check)\n</code></pre> <p>This indicates that all nodes on the cluster network can access this share, with subfolders, in read-write mode.</p>"},{"location":"kubernetes/Create-and-access-NFS-server-from-Kubernetes-on-3Engines-Cloud.html.html#3-make-the-share-available","title":"3. Make the share available\ud83d\udd17","text":"<p>Run the command below to make the share available:</p> <pre><code>sudo exportfs -a\n</code></pre> <p>Then restart the NFS server with:</p> <pre><code>sudo systemctl restart nfs-kernel-server\n</code></pre> <p>Exit from the NFS server VM.</p>"},{"location":"kubernetes/Create-and-access-NFS-server-from-Kubernetes-on-3Engines-Cloud.html.html#4-deploy-a-test-pod-on-the-cluster","title":"4. Deploy a test pod on the cluster\ud83d\udd17","text":"<p>Ensure you can access your cluster with kubectl. Have a file test-pod.yaml with the following contents:</p> <p>test-pod.yaml</p> <pre><code>apiVersion: v1\nkind: Pod\nmetadata:\n name: test-pod\n namespace: default\nspec:\n containers:\n - image: nginx\n name: test-container\n volumeMounts:\n - mountPath: /my-nfs-data\n name: test-volume\n volumes:\n - name: test-volume\n nfs:\n server: 10.0.0.118\n path: /mnt/myshare\n</code></pre> <p>The NFS server block refers to the private IP address of the NFS server machine, which is on our cluster network. 
Apply the yaml manifest with:</p> <pre><code>kubectl apply -f test-pod.yaml\n</code></pre> <p>We can then enter the shell of the test-pod with the command below:</p> <pre><code>kubectl exec -it test-pod -- sh\n</code></pre> <p>and see that the my-nfs-data folder got mounted properly:</p> <p></p> <p>To verify, create a file testfile in this folder, then exit the container. You can then SSH back to the NFS server and verify that testfile is available in the /mnt/myshare folder.</p> <p></p>"},{"location":"kubernetes/Creating-Additional-Nodegroups-in-Kubernetes-Cluster-on-3Engines-Cloud-OpenStack-Magnum.html.html","title":"Creating Additional Nodegroups in Kubernetes Cluster on 3Engines Cloud OpenStack Magnum\ud83d\udd17","text":""},{"location":"kubernetes/Creating-Additional-Nodegroups-in-Kubernetes-Cluster-on-3Engines-Cloud-OpenStack-Magnum.html.html#the-benefits-of-using-nodegroups","title":"The Benefits of Using Nodegroups\ud83d\udd17","text":"<p>A nodegroup is a group of nodes from a Kubernetes cluster that have the same configuration and run the user\u2019s containers. 
A single cluster can contain several nodegroups, so instead of creating multiple independent clusters, you may create only one and then separate workloads into nodegroups.</p> <p>A nodegroup separates the roles within the cluster and can</p> <ul> <li>limit the scope of damage if a given group is compromised,</li> <li>regulate the number of API requests originating from a certain group, and</li> <li>create scopes of privileges to specific node types and related workloads.</li> </ul> <p>Other uses of nodegroup roles also include:</p> <ul> <li>for testing purposes,</li> <li>if your Kubernetes environment is small on resources, you can create a minimal Kubernetes cluster and later on add nodegroups and thus increase the number of control and worker nodes.</li> <li>Nodes in a group can be created, upgraded and deleted individually, without affecting the rest of the cluster.</li> </ul>"},{"location":"kubernetes/Creating-Additional-Nodegroups-in-Kubernetes-Cluster-on-3Engines-Cloud-OpenStack-Magnum.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>The structure of the command openstack coe nodegroup</li> <li>How to produce manageable output from the nodegroup set of commands</li> <li>How to list what nodegroups are available in a cluster</li> <li>How to show the contents of one particular nodegroup in a cluster</li> <li>How to create a new nodegroup</li> <li>How to delete an existing nodegroup</li> <li>How to update nodegroups</li> <li>How to resize a nodegroup</li> <li>The benefits of using nodegroups in Kubernetes clusters</li> </ul>"},{"location":"kubernetes/Creating-Additional-Nodegroups-in-Kubernetes-Cluster-on-3Engines-Cloud-OpenStack-Magnum.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Hosting</p> <p>You need a 3Engines Cloud hosting account with Horizon interface https://horizon.3Engines.com.</p> <p>No. 
2 Creating clusters with CLI</p> <p>The article How To Use Command Line Interface for Kubernetes Clusters On 3Engines Cloud OpenStack Magnum will introduce you to creating clusters using a command-line interface.</p> <p>No. 3 Connect openstack client to the cloud</p> <p>Prepare openstack and magnum clients by executing Step 2 Connect OpenStack and Magnum Clients to Horizon Cloud from article How To Install OpenStack and Magnum Clients for Command Line Interface to 3Engines Cloud Horizon</p> <p>No. 4 Check available quotas</p> <p>Before creating additional node groups, check the state of the resources with Horizon commands Compute =&gt; Overview. See Dashboard Overview \u2013 Project Quotas And Flavors Limits on 3Engines Cloud.</p>"},{"location":"kubernetes/Creating-Additional-Nodegroups-in-Kubernetes-Cluster-on-3Engines-Cloud-OpenStack-Magnum.html.html#nodegroup-subcommands","title":"Nodegroup Subcommands\ud83d\udd17","text":"<p>Once you create a Kubernetes cluster on OpenStack Magnum, there are five nodegroup commands at your disposal:</p> <pre><code>openstack coe nodegroup create\n\nopenstack coe nodegroup delete\n\nopenstack coe nodegroup list\n\nopenstack coe nodegroup show\n\nopenstack coe nodegroup update\n</code></pre> <p>With this, you can repurpose the cluster to include various images, change volume access, set up max and min values for the number of nodes and so on.</p>"},{"location":"kubernetes/Creating-Additional-Nodegroups-in-Kubernetes-Cluster-on-3Engines-Cloud-OpenStack-Magnum.html.html#step-1-access-the-current-state-of-clusters-and-their-nodegroups","title":"Step 1 Access the Current State of Clusters and Their Nodegroups\ud83d\udd17","text":"<p>Here is how to see which clusters are available in the system:</p> <pre><code>openstack coe cluster list --max-width 120\n</code></pre> <p></p> <p>The default process of creating Kubernetes clusters on OpenStack Magnum produces two nodegroups, default-master and default-worker. 
Use commands</p> <pre><code>openstack coe nodegroup list kubelbtrue\n\nopenstack coe nodegroup list k8s-cluster\n</code></pre> <p>to list default nodegroups for those two clusters, kubelbtrue and k8s-cluster.</p> <p></p> <p>The default-worker node group cannot be removed or reconfigured, so plan ahead when creating the base cluster.</p>"},{"location":"kubernetes/Creating-Additional-Nodegroups-in-Kubernetes-Cluster-on-3Engines-Cloud-OpenStack-Magnum.html.html#step-2-how-to-create-a-new-nodegroup","title":"Step 2 How to Create a New Nodegroup\ud83d\udd17","text":"<p>In this step you learn about the parameters available for the nodegroup create command. This is the general structure:</p> <pre><code>openstack coe nodegroup create [-h]\n[--docker-volume-size &lt;docker-volume-size&gt;]\n[--labels &lt;KEY1=VALUE1,KEY2=VALUE2;KEY3=VALUE3...&gt;]\n[--node-count &lt;node-count&gt;]\n[--min-nodes &lt;min-nodes&gt;]\n[--max-nodes &lt;max-nodes&gt;]\n[--role &lt;role&gt;]\n[--image &lt;image&gt;]\n[--flavor &lt;flavor&gt;]\n[--merge-labels]\n&lt;cluster&gt; &lt;name&gt;\n</code></pre> <p>You will now create a nodegroup of two members called testing, with the role test, and add it to the cluster k8s-cluster:</p> <pre><code>openstack coe nodegroup create \\\n--node-count 2 \\\n--role test \\\nk8s-cluster testing\n</code></pre> <p>Then use the command</p> <pre><code>openstack coe nodegroup list k8s-cluster\n</code></pre> <p>to list the nodegroups twice. The first time, the new nodegroup will be in creating status; the second time, after a few seconds, it will already have been created.</p> <p></p> <p>In Horizon, use command Orchestration =&gt; Stacks to list the mechanisms that create new instances. 
In this case, the stack looks like this:</p> <p></p> <p>Still in Horizon, click on commands Container Infra =&gt; Clusters =&gt; k8s-cluster and see that there are now five nodes in total:</p> <p></p>"},{"location":"kubernetes/Creating-Additional-Nodegroups-in-Kubernetes-Cluster-on-3Engines-Cloud-OpenStack-Magnum.html.html#step-3-using-role-to-filter-nodegroups-in-the-cluster","title":"Step 3 Using role to Filter Nodegroups in the Cluster\ud83d\udd17","text":"<p>It is possible to filter node groups according to the role. Here is the command to show only the test nodegroup:</p> <pre><code>openstack coe nodegroup list k8s-cluster --role test\n</code></pre> <p></p> <p>Several node groups can share the same role name.</p> <p>The roles can be used to schedule the nodes when using the kubectl command directly on the cluster.</p>"},{"location":"kubernetes/Creating-Additional-Nodegroups-in-Kubernetes-Cluster-on-3Engines-Cloud-OpenStack-Magnum.html.html#step-4-show-details-of-the-nodegroup-created","title":"Step 4 Show Details of the Nodegroup Created\ud83d\udd17","text":"<p>Command show presents the details of a nodegroup in various formats \u2013 json, table, shell, value or yaml. The default is table, but use parameter --max-width to limit the number of columns in it:</p> <pre><code>openstack coe nodegroup show --max-width 80 k8s-cluster testing\n</code></pre> <p></p>"},{"location":"kubernetes/Creating-Additional-Nodegroups-in-Kubernetes-Cluster-on-3Engines-Cloud-OpenStack-Magnum.html.html#step-5-delete-the-existing-nodegroup","title":"Step 5 Delete the Existing Nodegroup\ud83d\udd17","text":"<p>In this step you will try to create a nodegroup with a small footprint:</p> <pre><code>openstack coe nodegroup create \\\n--node-count 2 \\\n--role test \\\n--image cirros-0.4.0-x86_64-2 \\\n--flavor eo1.xsmall \\\nk8s-cluster cirros\n</code></pre> <p>After one hour, the command was cancelled and the creation failed. 
The resources will, however, stay frozen in the system, so here is how to delete them.</p> <p>One way is to use the CLI delete subcommand, like this:</p> <pre><code>openstack coe nodegroup delete k8s-cluster cirros\n</code></pre> <p>The status will be changed to DELETE_IN_PROGRESS.</p> <p>Another way is to find the instances of those created nodes and delete them through the Horizon interface. Find the existing instances with commands Compute =&gt; Instances and filter by Instance Name, with text k8s-cluster-cirros-. It may look like this:</p> <p></p> <p>and then delete them by clicking on the red button Delete Instances.</p> <p>You will get a confirmation message in the upper right corner.</p> <p>Either way, the instances will not be deleted immediately, but rather scheduled for deletion in the near future.</p> <p>The default master and worker node groups cannot be deleted but all the others can.</p>"},{"location":"kubernetes/Creating-Additional-Nodegroups-in-Kubernetes-Cluster-on-3Engines-Cloud-OpenStack-Magnum.html.html#step-6-update-the-existing-nodegroup","title":"Step 6 Update the Existing Nodegroup\ud83d\udd17","text":"<p>In this step you will directly update the existing nodegroup, rather than deleting and re-creating it. The example command is:</p> <pre><code>openstack coe nodegroup update k8s-cluster testing replace min_node_count=1\n</code></pre> <p>Instead of replace, it is also possible to use the verbs add and delete.</p> <p>In the above example, you are setting the minimum number of nodes to 1. (Previously it was 0, as parameter min_node_count was not specified and its default value is 0.)</p>"},{"location":"kubernetes/Creating-Additional-Nodegroups-in-Kubernetes-Cluster-on-3Engines-Cloud-OpenStack-Magnum.html.html#step-7-resize-the-nodegroup","title":"Step 7 Resize the Nodegroup\ud83d\udd17","text":"<p>Resizing the nodegroup is similar to resizing the cluster, with the addition of parameter --nodegroup. 
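Resize and update requests are only accepted within the minimum/maximum node bounds configured for the nodegroup. Magnum performs this validation server-side; the Python sketch below only illustrates the constraint, with a hypothetical maximum of 2:

```python
def resize_allowed(requested: int, min_nodes: int, max_nodes: int) -> bool:
    """Check that a requested node count lies within the nodegroup's
    configured bounds (inclusive on both ends)."""
    return min_nodes <= requested <= max_nodes

# With min_node_count=1 (as set in Step 6) and an illustrative max of 2:
print(resize_allowed(1, 1, 2))  # True  -> resizing to 1 node would be accepted
print(resize_allowed(3, 1, 2))  # False -> scaling above the maximum is rejected
```

This is why the min-nodes and max-nodes values chosen at nodegroup creation matter: they bound every later resize.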
Currently, the number of nodes in group testing is 2. Make it 1:</p> <pre><code>openstack coe cluster resize k8s-cluster --nodegroup testing 1\n</code></pre> <p>To see the result, apply the command</p> <pre><code>openstack coe nodegroup list --max-width 120 k8s-cluster\n</code></pre> <p>and get:</p> <p></p> <p>The cluster cannot be scaled outside of the min-nodes/max-nodes bounds set when the nodegroup was created.</p> <p>Here is what the state of the networks looks like after all these changes (commands Network =&gt; Network Topology =&gt; Small in Horizon interface):</p> <p></p>"},{"location":"kubernetes/Default-Kubernetes-cluster-templates-in-3Engines-Cloud-Cloud.html.html","title":"Default Kubernetes cluster templates in 3Engines Cloud Cloud\ud83d\udd17","text":"<p>In this article we shall list the Kubernetes cluster templates available on 3Engines Cloud and explain the differences among them.</p>"},{"location":"kubernetes/Default-Kubernetes-cluster-templates-in-3Engines-Cloud-Cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>List available templates on your cloud</li> <li>Explain the difference between calico and cilium network drivers</li> <li>How to choose a proper template</li> <li>Overview and benefits of localstorage templates</li> <li>Example of creating a localstorage template using HMD and HMAD flavors</li> </ul>"},{"location":"kubernetes/Default-Kubernetes-cluster-templates-in-3Engines-Cloud-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p> <p>No. 2 Private and public keys</p> <p>To create a cluster, you will need an available SSH key pair. If you do not have one already, follow this article to create it in the OpenStack dashboard: How to create key pair in OpenStack Dashboard on 3Engines Cloud.</p> <p>No. 
3 Documentation for standard templates</p> <p>Documentation for all 1.23.16 drivers is here.</p> <p>Documentation for localstorage templates:</p> k8s-stable-localstorage-1.21.5 Kubernetes release 1.21 k8s-stable-localstorage-1.22.5 Kubernetes release 1.22 k8s-stable-localstorage-1.23.5 Kubernetes release 1.23 <p>No. 4 How to create Kubernetes clusters</p> <p>The general procedure is explained in How to Create a Kubernetes Cluster Using 3Engines Cloud OpenStack Magnum.</p> <p>No. 5 Using vGPU in Kubernetes clusters</p> <p>If a template name contains \u201cvgpu\u201d, that template can be used to create so-called \u201cvGPU-first\u201d clusters.</p> <p>To learn how to set up vGPU in Kubernetes clusters on 3Engines Cloud cloud, see Deploying vGPU workloads on 3Engines Cloud Kubernetes.</p>"},{"location":"kubernetes/Default-Kubernetes-cluster-templates-in-3Engines-Cloud-Cloud.html.html#templates-available-on-your-cloud","title":"Templates available on your cloud\ud83d\udd17","text":"<p>The exact number of available default Kubernetes cluster templates depends on the cloud you choose to work with.</p> WAW4-1 <p>These are the default Kubernetes cluster templates on WAW4-1 cloud:</p> <p></p> WAW3-1 <p>These are the default Kubernetes cluster templates on WAW3-1 cloud:</p> <p></p> WAW3-2 <p>Default templates for WAW3-2 cloud:</p> <p></p> FRA1-2 <p>Default templates for FRA1-2 cloud:</p> <p></p> <p>The converse is also true: you may select the cloud according to the type of cluster you would like to use. For instance, you would have to select WAW3-1 cloud if you wanted to use vGPU on your cluster.</p>"},{"location":"kubernetes/Default-Kubernetes-cluster-templates-in-3Engines-Cloud-Cloud.html.html#how-to-choose-a-proper-template","title":"How to choose a proper template\ud83d\udd17","text":"<p>Standard templates</p> <p>Standard templates are general in nature and you can use them for any type of Kubernetes cluster. 
Each will produce a working Kubernetes cluster on 3Engines Cloud OpenStack Magnum hosting. The default network driver is calico. The template that does not specify calico in its name, k8s-1.23.16-v1.0.3, is identical to the template that does specify calico. Both are placed in the left column in the following table:</p> calico cilium k8s-1.23.16-v1.0.3 k8s-1.23.16-cilium-v1.0.3 k8s-1.23.16-calico-v1.0.3 <p>Standard templates can also use vGPU hardware if available in the cloud. Using vGPU with Kubernetes clusters is explained in Prerequisite No. 5.</p> <p>Templates with vGPU</p> calico vGPU cilium vGPU k8s-1.23.16-vgpu-v1.0.0 k8s-1.23.16-cilium-vgpu-v1.0.0 k8s-1.23.16-calico-vgpu-v1.0.0 <p>Again, the templates in the left column are identical.</p> <p>If the application does not require a great many operations, then a standard template should be sufficient.</p> <p>You can also dig deeper and choose the template according to the network plugin used.</p>"},{"location":"kubernetes/Default-Kubernetes-cluster-templates-in-3Engines-Cloud-Cloud.html.html#network-plugins-for-kubernetes-clusters","title":"Network plugins for Kubernetes clusters\ud83d\udd17","text":"<p>Kubernetes cluster templates at 3Engines Cloud cloud use calico or cilium plugins for controlling network traffic. Both are CNI compliant. Calico is the default plugin, meaning that if the template name does not specify the plugin, the calico driver is used. If the template name specifies cilium then, of course, the cilium driver is used.</p>"},{"location":"kubernetes/Default-Kubernetes-cluster-templates-in-3Engines-Cloud-Cloud.html.html#calico-the-default","title":"Calico (the default)\ud83d\udd17","text":"<p>Calico uses the BGP protocol to move network packets towards the IP addresses of the pods. Calico can be faster than its competitors but its most remarkable feature is support for network policies. 
With those, you can define which pods can send and receive traffic and also manage the security of the network.</p> <p>Calico can apply policies to multiple types of endpoints such as pods, virtual machines and host interfaces. It also supports cryptographic identity. Calico policies can be used on their own or together with the Kubernetes network policies.</p>"},{"location":"kubernetes/Default-Kubernetes-cluster-templates-in-3Engines-Cloud-Cloud.html.html#cilium","title":"Cilium\ud83d\udd17","text":"<p>Cilium draws its power from a technology called eBPF. It exposes programmable hooks to the network stack in the Linux kernel. eBPF uses those hooks to reprogram Linux runtime behaviour without any loss of speed or safety. There is also no need to recompile the Linux kernel for it to become aware of events in Kubernetes clusters. In essence, eBPF enables Linux to watch over Kubernetes and react appropriately.</p> <p>With Cilium, the relationships amongst various cluster parts are as follows:</p> <ul> <li>pods in the cluster (as well as the Cilium driver itself) are using eBPF instead of using the Linux kernel directly,</li> <li>kubelet uses the Cilium driver through the CNI compliance and</li> <li>the Cilium driver implements network policy, services and load balancing, flow and policy logging, as well as computing various metrics.</li> </ul> <p>Using Cilium especially makes sense if you require fine-grained security controls or need to reduce latency in large Kubernetes clusters.</p>"},{"location":"kubernetes/Default-Kubernetes-cluster-templates-in-3Engines-Cloud-Cloud.html.html#overview-and-benefits-of-localstorage-templates","title":"Overview and benefits of localstorage templates\ud83d\udd17","text":"<p>Compared to standard templates, the localstorage templates may be a better fit for resource-intensive apps.</p> <p>NVMe stands for Nonvolatile Memory Express and is a newer storage access and transport protocol for flash and solid-state drives (SSDs). 
localstorage templates provision the cluster with Virtual Machine flavors which have NVMe storage available.</p> <p>Each cluster contains an etcd volume, which serves as its external database. Using NVMe storage will speed up access to etcd and that, in turn, speeds up cluster operations.</p> <p>Applications such as day trading, personal finances, AI and the like may have so many transactions that using localstorage templates becomes a viable option.</p> <p>In WAW3-1 cloud, virtual machine flavors with NVMe have the prefix HMD and they are resource-intensive:</p> <pre><code>openstack flavor list\n+--------------+--------+------+-----------+-------+\n| Name | RAM | Disk | Ephemeral | VCPUs |\n+--------------+--------+------+-----------+-------+\n| hmd.xlarge | 65536 | 200 | 0 | 8 |\n| hmd.medium | 16384 | 50 | 0 | 2 |\n| hmd.large | 32768 | 100 | 0 | 4 |\n</code></pre> <p>You would use an HMD flavor mainly for the master node(s) in the cluster.</p> <p>In WAW3-2 cloud, you would use flavors starting with HMAD instead of HMD.</p>"},{"location":"kubernetes/Default-Kubernetes-cluster-templates-in-3Engines-Cloud-Cloud.html.html#example-parameters-to-create-a-new-cluster-with-localstorage-and-nvme","title":"Example parameters to create a new cluster with localstorage and NVMe\ud83d\udd17","text":"<p>For a general discussion of parameters, see Prerequisite No. 4. What follows is a simplified example, geared toward creating a cluster using localstorage.</p> <p>We shall use WAW3-1 with HMD flavors in the example but you can, of course, supply HMAD flavors for WAW3-2 and so on.</p> <p>The only deviation from the usual procedure is that it is mandatory to add label etcd_volume_size=0 in the Advanced window. 
Without it, the localstorage template won\u2019t work.</p> <p>Start creating a cluster with the usual chain of commands Container Infra -&gt; Clusters -&gt; + Create New Cluster.</p> <p>In the screenshot below, we selected k8s-stable-localstorage-1.23.5 as our local storage template of choice, in the mandatory field Cluster Template.</p> <p>For field Keypair, use an SSH key that you already have; if you do not have one yet, use Prerequisite No. 2 to obtain it.</p> <p></p> <p>Let master nodes use one of the HMD flavors:</p> <p></p> <p>Proceed to enter the usual parameters into the Network and Management windows.</p> <p>The last window, Advanced, is the place to add label etcd_volume_size=0.</p> <p></p> <p>The result will be a newly formed cluster called NVMe:</p> <p></p>"},{"location":"kubernetes/Deploy-Keycloak-on-Kubernetes-with-a-sample-app-on-3Engines-Cloud.html.html","title":"Deploy Keycloak on Kubernetes with a sample app on 3Engines Cloud\ud83d\udd17","text":"<p>Keycloak is a large Open-Source Identity Management suite capable of handling a wide range of identity-related use cases.</p> <p>Using Keycloak, it is straightforward to deploy a robust authentication/authorization solution for your applications. After the initial deployment, you can easily configure it to meet new identity-related requirements, e.g. multi-factor authentication, federation to social providers, custom password policies, and many others.</p>"},{"location":"kubernetes/Deploy-Keycloak-on-Kubernetes-with-a-sample-app-on-3Engines-Cloud.html.html#what-we-are-going-to-do","title":"What We Are Going To Do\ud83d\udd17","text":"<ul> <li>Deploy Keycloak on a Kubernetes cluster</li> <li>Configure Keycloak: create a realm, client and a user</li> <li>Deploy a sample Python web application using Keycloak for authentication</li> </ul>"},{"location":"kubernetes/Deploy-Keycloak-on-Kubernetes-with-a-sample-app-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 
1 Hosting</p> <p>You need a 3Engines Cloud hosting account with Horizon interface https://horizon.3Engines.com.</p> <p>No. 2 A running Kubernetes cluster and kubectl activated</p> <p>A Kubernetes cluster; to create one, refer to: How to Create a Kubernetes Cluster Using 3Engines Cloud OpenStack Magnum. To activate kubectl, see How To Access Kubernetes Cluster Post Deployment Using Kubectl On 3Engines Cloud OpenStack Magnum.</p> <p>No. 3 Basic knowledge of Python and pip package management</p> <p>Basic knowledge of Python and pip package management is expected. Python 3 and pip should already be installed and available on your local machine.</p> <p>No. 4 Familiarity with OpenID Connect (OIDC) terminology</p> <p>Certain familiarity with OpenID Connect (OIDC) terminology is required. Some key terms will be briefly explained in this article.</p>"},{"location":"kubernetes/Deploy-Keycloak-on-Kubernetes-with-a-sample-app-on-3Engines-Cloud.html.html#step-1-deploy-keycloak-on-kubernetes","title":"Step 1 Deploy Keycloak on Kubernetes\ud83d\udd17","text":"<p>Let\u2019s first create a dedicated Kubernetes namespace for Keycloak. This is optional, but good practice:</p> <pre><code>kubectl create namespace keycloak\n</code></pre> <p>Then deploy Keycloak into this namespace:</p> <pre><code>kubectl create -f https://raw.githubusercontent.com/keycloak/keycloak-quickstarts/latest/kubernetes-examples/keycloak.yaml -n keycloak\n</code></pre> <p>Keycloak, by default, gets exposed as a Kubernetes service of type LoadBalancer, on port 8080. You need to find out the service public IP with the following command (note it might take a couple of minutes to populate):</p> <pre><code>kubectl get services -n keycloak\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkeycloak LoadBalancer 10.254.8.94 64.225.128.216 8080:31228/TCP 23h\n</code></pre> <p>Note</p> <p>In our case, the external IP address is 64.225.128.216 so that is what we are going to use in this article. 
Be sure to replace it with your own IP address.</p> <p>So, enter http://64.225.128.216:8080/ in the browser to access Keycloak:</p> <p></p> <p>Next, click on Administration Console and you will get redirected to the login screen, where you can sign in as an admin (login/password admin/admin)</p> <p></p> <p>This is a full-screen view of the Keycloak window:</p> <p></p>"},{"location":"kubernetes/Deploy-Keycloak-on-Kubernetes-with-a-sample-app-on-3Engines-Cloud.html.html#step-2-create-keycloak-realm","title":"Step 2 Create Keycloak realm\ud83d\udd17","text":"<p>In Keycloak terminology, a realm is a dedicated space for managing an isolated subset of users, roles and other related entities. Keycloak initially has a master realm used for administration of Keycloak itself.</p> <p>Our next step is to create our own realm and start operating within its context. To create it, click first on the master field in the upper left corner, then click on Create Realm.</p> <p></p> <p>We will just enter the realm name myrealm, leaving the rest unchanged:</p> <p></p> <p>When the realm is created (and selected), we operate within this realm:</p> <p></p> <p>In the upper left corner, the name of the selected realm, myrealm, now appears instead of master.</p>"},{"location":"kubernetes/Deploy-Keycloak-on-Kubernetes-with-a-sample-app-on-3Engines-Cloud.html.html#step-3-create-and-configure-keycloak-client","title":"Step 3 Create and configure Keycloak client\ud83d\udd17","text":"<p>Clients are entities in Keycloak that can request Keycloak to authenticate users. In practical terms, they can be thought of as representations of individual applications that want to utilize Keycloak-managed authentication/authorization.</p> <p>Within the myrealm realm, we will now create a client myapp that will represent the web application which we will create in one of the further steps. 
To create one such client, click on the Clients panel on the left menu, and then on the Create Client button.</p> <p>You will enter a wizard consisting of 3 steps. In the first step we just enter the ID of the client (which, in our case, is myapp), leaving other settings unchanged:</p> <p></p> <p>The next screen involves selecting some crucial settings relating to the authentication/authorization requirements of your specific application.</p> <p></p> <p>The options you choose will depend on your particular scenario:</p> Scenario 1 Traditional server applications For the purpose of this article and our demo app, we use a traditional client-server application. We then need to turn on the \u201cClient Authentication\u201d toggle. Scenario 2 SPA For single page applications, you can stay with the default where the \u201cClient Authentication\u201d toggle is off. <p>For our demo app, we will require authentication via secret, so be sure to activate option Client Authentication. Once it is turned on, we will be able to obtain the value of the secret later on, in Step 5.</p> <p>The last step of the Wizard involves setting some key coordinates of our client application. The ones we modify involve:</p> <p></p> root URL In our case, we want to deploy the app locally, so we set the root to http://localhost. You will need to change this if your app will be exposed as a public service. Valid redirect URIs This setting represents a route in our app, to which a user will be redirected after a successful login from Keycloak. In our case, we leave this setting very permissive with a \u201c*\u201d, allowing redirect to any path in our application. For production, you should make this more explicit, using a dedicated route, say, /callback, for this purpose. Web origins This setting specifies hosts that can send requests to Keycloak. Requests from other hosts will not pass the cross-origin check and will be rejected. Here too, we are very permissive, setting a \u201c*\u201d. 
As above, strongly consider changing this setting for production, and limit it to trusted sources only. <p>After hitting Save, your client is created. You can then modify the previously selected settings of the created client, and add new, more specific ones. There are vast possibilities for further customization depending on your app specifics; this, however, is beyond the scope of this article.</p>"},{"location":"kubernetes/Deploy-Keycloak-on-Kubernetes-with-a-sample-app-on-3Engines-Cloud.html.html#step-4-create-a-user-in-keycloak","title":"Step 4 Create a User in Keycloak\ud83d\udd17","text":"<p>After creating the Client, we will proceed to create our first User in Keycloak. In order to do so, click on the Users tab on the left and then Create New User:</p> <p>We will again keep things minimal and only enter test as the username, leaving other options intact:</p> <p></p> <p>Next, we will set up password credentials for the newly created user. Select the Credentials tab and then Set password, type in the password with confirmation in the form and hit Save:</p> <p></p>"},{"location":"kubernetes/Deploy-Keycloak-on-Kubernetes-with-a-sample-app-on-3Engines-Cloud.html.html#step-5-retrieve-client-secret-from-keycloak","title":"Step 5 Retrieve client secret from Keycloak\ud83d\udd17","text":"<p>Once we have Keycloak set up, we will need to extract the client secret, so that Keycloak establishes trust with our application.</p> <p>The client_secret can be extracted by going into the myrealm realm, selecting myapp as the client and then taking the client secret with the following chain of commands:</p> <p>Clients \u2013&gt; Client detail \u2013&gt; Credentials</p> <p>Once in tab Credentials, the secret will become accessible through field Client secret:</p> <p></p> <p>For privacy reasons, in the screenshot above, it is painted yellow. 
In your case, take note of its value, as in the next step you will need to paste it into the application code.</p>"},{"location":"kubernetes/Deploy-Keycloak-on-Kubernetes-with-a-sample-app-on-3Engines-Cloud.html.html#step-6-create-a-flask-web-app-utilizing-keycloak-authentication","title":"Step 6 Create a Flask web app utilizing Keycloak authentication\ud83d\udd17","text":"<p>To build the app, we will use Flask, which is a lightweight Python-based web framework. Keycloak supports a wide range of other technologies as well. We will use the Flask-OIDC library, which extends Flask with the capability to run OpenID Connect authentication/authorization scenarios.</p> <p>As a prerequisite, you need to install the following pip packages to cover the dependency chain. It is best to run the commands from an existing Python virtual environment:</p> <pre><code>pip install Werkzeug==2.3.8\npip install Flask==2.0.1\npip install wheel==0.40.0\npip install flask-oidc==1.4.0\npip install itsdangerous==2.0.1\n</code></pre> <p>Then you will need to create 2 files: app.py and keycloak.json, and make the following changes in them:</p> Replace the IP address In keycloak.json, replace 64.225.128.216 with your own external IP from Step 1. Replace client_secret Again in keycloak.json, replace the value of variable client_secret with the secret from Step 5. Replace SECRET_KEY In file app.py, replace the value of SECRET_KEY with the same secret from Step 5. 
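The values collected so far map directly onto the OIDC endpoints that Keycloak exposes per realm. The sketch below only assembles the token-endpoint URL (the same one that appears in keycloak.json later in this step) together with a password-grant payload for a manual sanity check of the realm/client/user setup; it performs no network call, and the IP, realm, client and placeholder secret are the example values from this article:

```python
# Assemble the per-realm OIDC token endpoint from the example values
# used in this article (replace the IP with your own from Step 1).
KEYCLOAK_BASE = "http://64.225.128.216:8080"
REALM = "myrealm"

token_url = f"{KEYCLOAK_BASE}/realms/{REALM}/protocol/openid-connect/token"

# A password-grant payload you could POST to token_url to verify the
# setup; client_secret is the placeholder from Step 5 and the password
# is whatever you set for the test user in Step 4.
payload = {
    "grant_type": "password",
    "client_id": "myapp",
    "client_secret": "XXXXXX",
    "username": "test",
    "password": "<your-password>",
}

print(token_url)
```

A successful POST of this payload to the token endpoint would return an access token, confirming that the realm, client and user are wired up correctly before any application code is involved.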
<p>Create a new file called app.py and paste in the following contents:</p> <pre><code>from flask import Flask, g\nfrom flask_oidc import OpenIDConnect\nimport json\n\napp = Flask(__name__)\n\napp.config.update(\n SECRET_KEY='XXXXXX',\n OIDC_CLIENT_SECRETS='keycloak.json',\n OIDC_INTROSPECTION_AUTH_METHOD='client_secret_post',\n OIDC_TOKEN_TYPE_HINT='access_token',\n OIDC_SCOPES=['openid','email','profile'],\n OIDC_OPENID_REALM='myrealm'\n )\n\noidc = OpenIDConnect(app)\n\n@app.route('/')\ndef index():\n if oidc.user_loggedin:\n info = oidc.user_getinfo([\"preferred_username\", \"email\", \"sub\"])\n return 'Welcome %s' % info.get(\"preferred_username\")\n else:\n return '&lt;h1&gt;Not logged in&lt;/h1&gt;'\n\n@app.route('/login')\n@oidc.require_login\ndef login():\n token = oidc.get_access_token()\n info = oidc.user_getinfo([\"preferred_username\", \"email\", \"sub\"])\n username = info.get(\"preferred_username\")\n return \"Token: \" + token + \"&lt;br/&gt;&lt;br/&gt; Username: \" + username\n\n@app.route('/logout')\ndef logout():\n oidc.logout()\n return '&lt;h2&gt;Hi, you have been logged out! &lt;a href=\"/\"&gt;Return&lt;/a&gt;&lt;/h2&gt;'\n</code></pre> <p>The application code bootstraps the Flask application and provides the configurations necessary for flask_oidc. We need to configure the</p> <ul> <li>name of our realm, the</li> <li>client secret_key and the</li> <li>additional settings that reflect our specific sample flow.</li> </ul> <p>Also, this configuration points to another configuration file, keycloak.json, which reflects further settings of our Keycloak realm. 
Specifically, in it you will find the client ID and the secret, as well as the endpoints where Keycloak makes available further information about the realm settings.</p> <p>Create the required file keycloak.json, in the same working folder as the app.py file:</p> <pre><code>{\n\"web\": {\n \"client_id\": \"myapp\",\n \"client_secret\": \"XXXXXX\",\n \"auth_uri\": \"http://64.225.128.216:8080/realms/myrealm/protocol/openid-connect/auth\",\n \"token_uri\": \"http://64.225.128.216:8080/realms/myrealm/protocol/openid-connect/token\",\n \"issuer\": \"http://64.225.128.216:8080/realms/myrealm\",\n \"userinfo_uri\": \"http://64.225.128.216:8080/realms/myrealm/protocol/openid-connect/userinfo\",\n \"token_introspection_uri\": \"http://64.225.128.216:8080/realms/myrealm/protocol/openid-connect/token/introspect\",\n \"redirect_uris\": [\n \"http://localhost:5000/*\"\n ]\n }\n}\n</code></pre> <p>Note that app.py creates 3 routes:</p> / In this route, a page is served that provides the name of a logged-in user. Alternatively, if the user is not logged in yet, it prompts to do so. <code>/login</code> This route redirects the user to the Keycloak login page and upon successful authentication provides the user name and token <code>/logout</code> Entering this route logs the user out."},{"location":"kubernetes/Deploy-Keycloak-on-Kubernetes-with-a-sample-app-on-3Engines-Cloud.html.html#step-7-test-the-application","title":"Step 7 Test the application\ud83d\udd17","text":"<p>To test the application, execute the following command from the working directory in which file app.py is placed:</p> <pre><code>flask run\n</code></pre> <p>This is the result, in a CLI window:</p> <p></p> <p>We now know that localhost is running the Flask server on port 5000. Enter localhost:5000 into the browser address bar and it will display the site served on the base route: / . We have not logged in our user yet, hence the respective message:</p> <p></p> <p>The next step is to enter the /login route. 
Enter localhost:5000/login into the browser address bar. Doing so redirects to Keycloak, prompting you to log in to myapp:</p> <p></p> <p>To authenticate, enter the username of the user we created in Step 4 (username: test), and the password you used to create this user. With default settings, you might be asked to change the password after the first login; just proceed accordingly. After logging in, our username and token get displayed (for security reasons, parts of the token are painted in yellow):</p> <p></p> <p>The last route to test is /logout . When entering localhost:5000/logout in the browser, we can see the screen below. Entering this route calls the flask-oidc method that logs the user out, also clearing the session cookie under the hood.</p> <p></p>"},{"location":"kubernetes/Deploying-HTTPS-Services-on-Magnum-Kubernetes-in-3Engines-Cloud-Cloud.html.html","title":"Deploying HTTPS Services on Magnum Kubernetes in 3Engines Cloud Cloud\ud83d\udd17","text":"<p>Kubernetes makes it very quick to deploy and publicly expose an application, for example using the LoadBalancer service type. Sample deployments, which demonstrate such capability, are usually served with HTTP. 
Deploying a production-ready service, secured with HTTPS, can also be done smoothly by using additional tools.</p> <p>In this article, we show how to deploy a sample HTTPS-protected service on 3Engines Cloud cloud.</p>"},{"location":"kubernetes/Deploying-HTTPS-Services-on-Magnum-Kubernetes-in-3Engines-Cloud-Cloud.html.html#what-we-are-going-to-cover","title":"What We are Going to Cover\ud83d\udd17","text":"<ul> <li>Install Cert Manager\u2019s Custom Resource Definitions</li> <li>Install Cert Manager Helm chart</li> <li>Create a Deployment and a Service</li> <li>Create and Deploy an Issuer</li> <li>Associate the domain with NGINX Ingress</li> <li>Create and Deploy an Ingress Resource</li> </ul>"},{"location":"kubernetes/Deploying-HTTPS-Services-on-Magnum-Kubernetes-in-3Engines-Cloud-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p> <p>No. 2 Kubernetes cluster deployed on cloud, with NGINX Ingress enabled</p> <p>See this article How to Create a Kubernetes Cluster Using 3Engines Cloud OpenStack Magnum</p> <p>No. 3 Familiarity with kubectl</p> <p>For further instructions refer to How To Access Kubernetes Cluster Post Deployment Using Kubectl On 3Engines Cloud OpenStack Magnum</p> <p>No. 4 Familiarity with Kubernetes Ingress feature</p> <p>It is explained in the article Using Kubernetes Ingress on 3Engines Cloud OpenStack Magnum</p> <p>No. 5 Familiarity with deploying Helm charts</p> <p>See this article:</p> <p>Deploying Helm Charts on Magnum Kubernetes Clusters on 3Engines Cloud Cloud</p> <p>No. 6 A domain purchased from a registrar</p> <p>You must also own a domain purchased from any registrar (domain reseller). Obtaining a domain from registrars is not covered in this article.</p> <p>No. 7 Use the DNS command in Horizon to connect the domain name</p> <p>This is optional. 
Here is the article with detailed information:</p> <p>DNS as a Service on 3Engines Cloud Hosting</p>"},{"location":"kubernetes/Deploying-HTTPS-Services-on-Magnum-Kubernetes-in-3Engines-Cloud-Cloud.html.html#step-1-install-cert-managers-custom-resource-definitions-crds","title":"Step 1 Install Cert Manager\u2019s Custom Resource Definitions (CRDs)\ud83d\udd17","text":"<p>We assume you have your</p> <ul> <li>Magnum cluster up and running and</li> <li>kubectl pointing to your cluster config file.</li> </ul> <p>As a pre-check, you can list the nodes on your cluster:</p> <pre><code># export KUBECONFIG=&lt;your-kubeconfig-file-location&gt;\nkubectl get nodes\n</code></pre> <p>The CertManager Helm chart utilizes a few Custom Resource Definitions (CRDs), which we will need to deploy on our cluster. Aside from the default Kubernetes resources (e.g., Pods, Deployments or Services), CRDs make it possible to deploy custom resources defined by third-party developers to satisfy further customized use cases. Let\u2019s add the CRDs to our cluster with the following command:</p> <pre><code>kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.9.2/cert-manager.crds.yaml\n</code></pre> <p>The list of the resources will be displayed after running the command. If we later want to refer to them, we can also use the following kubectl command:</p> <pre><code>kubectl get crd -l app.kubernetes.io/name=cert-manager\n...\nNAME CREATED AT\ncertificaterequests.cert-manager.io 2022-12-18T11:15:08Z\ncertificates.cert-manager.io 2022-12-18T11:15:08Z\nchallenges.acme.cert-manager.io 2022-12-18T11:15:08Z\nclusterissuers.cert-manager.io 2022-12-18T11:15:08Z\nissuers.cert-manager.io 2022-12-18T11:15:08Z\norders.acme.cert-manager.io 2022-12-18T11:15:08Z\n</code></pre> <p>Warning</p> <p>Magnum introduces a few pod security policies (PSP) which provide some extra safety precautions for the cluster, but they conflict with the CertManager Helm chart. 
PodSecurityPolicy is deprecated as of Kubernetes v1.21 and removed in v1.25, but it is still supported in the Kubernetes versions 1.21 to 1.23 available on the 3Engines Cloud. The commands below may produce deprecation warnings, but the installation should continue nevertheless.</p>"},{"location":"kubernetes/Deploying-HTTPS-Services-on-Magnum-Kubernetes-in-3Engines-Cloud-Cloud.html.html#step-2-install-certmanager-helm-chart","title":"Step 2 Install CertManager Helm chart\ud83d\udd17","text":"<p>We assume you have installed Helm according to the article mentioned in Prerequisite No. 5. The result of that article will be the file my-values.yaml; to ensure correct deployment of the CertManager Helm chart, we will need to</p> <ul> <li>override it and</li> <li>insert the appropriate content into it:</li> </ul> <p>my-values.yaml</p> <pre><code>global:\n podSecurityPolicy:\n enabled: true\n useAppArmor: false\n</code></pre> <p>The following commands will both install the CertManager Helm chart into the namespace cert-manager and use my-values.yaml at the same time:</p> <pre><code>helm repo add jetstack https://charts.jetstack.io\nhelm repo update\nhelm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version v1.9.2 --values my-values.yaml\n</code></pre> <p>This is the result:</p> <pre><code>helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version v1.9.2 --values my-values.yaml\nW0208 10:16:08.364635 212 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+\nW0208 10:16:08.461599 212 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+\nW0208 10:16:08.502602 212 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+\nW0208 10:16:11.489377 212 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+\nW0208 10:16:11.489925 212 warnings.go:70] 
policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+\nW0208 10:16:11.524300 212 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+\nW0208 10:16:13.949045 212 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+\nW0208 10:16:15.038803 212 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+\nW0208 10:17:36.084859 212 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+\nNAME: cert-manager\nLAST DEPLOYED: Wed Feb 8 10:16:07 2023\nNAMESPACE: cert-manager\nSTATUS: deployed\nREVISION: 1\nTEST SUITE: None\nNOTES:\ncert-manager v1.9.2 has been deployed successfully!\n\nIn order to begin issuing certificates, you will need to set up a ClusterIssuer\nor Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).\n</code></pre> <p>We see that cert-manager is deployed successfully, but we also get a hint that a ClusterIssuer or an Issuer resource has to be installed as well. Our next step is to install a sample service into the cluster and then continue with the creation and deployment of an Issuer.</p>"},{"location":"kubernetes/Deploying-HTTPS-Services-on-Magnum-Kubernetes-in-3Engines-Cloud-Cloud.html.html#step-3-create-a-deployment-and-a-service","title":"Step 3 Create a Deployment and a Service\ud83d\udd17","text":"<p>Let\u2019s deploy an NGINX service as a standard example of a Kubernetes app. First we create a standard Kubernetes deployment and then a service of type NodePort. 
Write the following contents to the file my-nginx.yaml:</p> <p>my-nginx.yaml</p> <pre><code>apiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: my-nginx-deployment\nspec:\n selector:\n matchLabels:\n run: my-nginx\n replicas: 1\n template:\n metadata:\n labels:\n run: my-nginx\n spec:\n containers:\n - name: my-nginx\n image: nginx\n ports:\n - containerPort: 80\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: my-nginx-service\n labels:\n run: my-nginx\nspec:\n type: NodePort\n ports:\n - port: 80\n protocol: TCP\n selector:\n run: my-nginx\n</code></pre> <p>Deploy with the following command:</p> <pre><code>kubectl apply -f my-nginx.yaml\n</code></pre>"},{"location":"kubernetes/Deploying-HTTPS-Services-on-Magnum-Kubernetes-in-3Engines-Cloud-Cloud.html.html#step-4-create-and-deploy-an-issuer","title":"Step 4 Create and Deploy an Issuer\ud83d\udd17","text":"<p>Now install an Issuer. It is a custom Kubernetes resource representing a Certificate Authority (CA), which ensures that our HTTPS certificates are signed and therefore trusted by browsers. CertManager supports different issuers; in our example we will use Let\u2019s Encrypt, which uses the ACME protocol.</p> <p>Create a new file called my-nginx-issuer.yaml and paste the following content into it. 
Change the email address XXXXXXXXX@YYYYYYYYY.com to your own, real email address.</p> <p>my-nginx-issuer.yaml</p> <pre><code>apiVersion: cert-manager.io/v1\nkind: Issuer\nmetadata:\n name: my-nginx-issuer\nspec:\n acme:\n email: XXXXXXXXX@YYYYYYYYY.com\n server: https://acme-v02.api.letsencrypt.org/directory # production\n privateKeySecretRef:\n name: letsencrypt-secret # different secret name than for ingress\n solvers:\n # HTTP-01 challenge provider, creates additional ingress, refer to CertManager documentation for detailed explanation\n - http01:\n ingress:\n class: nginx\n</code></pre> <p>Then deploy on the cluster:</p> <pre><code>kubectl apply -f my-nginx-issuer.yaml\n</code></pre> <p>As a result, the Issuer gets deployed, and a Secret called letsencrypt-secret with a private key is deployed as well.</p>"},{"location":"kubernetes/Deploying-HTTPS-Services-on-Magnum-Kubernetes-in-3Engines-Cloud-Cloud.html.html#step-5-associate-the-domain-with-nginx-ingress","title":"Step 5 Associate the Domain with NGINX Ingress\ud83d\udd17","text":"<p>To see the site in a browser, your HTTPS certificate will need to be associated with a specific domain. To follow along, you should have a real domain already registered at a domain registrar.</p> <p>When you deployed your cluster with NGINX ingress, behind the scenes a LoadBalancer was deployed, with a public IP address exposed. You can obtain this address by looking it up in the Horizon web interface. If your list of floating IPs is long, the right one can be easily recognized by name:</p> <p></p> <p>Now, at your domain registrar, you need to associate the A record of the domain with the floating IP address of the ingress, where your application will be exposed. The way to achieve this will vary by registrar, so we will not provide detailed instructions here.</p> <p>You can also use the DNS service in Horizon to connect your domain name with the cluster. See Prerequisite No. 
7 for additional details.</p>"},{"location":"kubernetes/Deploying-HTTPS-Services-on-Magnum-Kubernetes-in-3Engines-Cloud-Cloud.html.html#step-6-create-and-deploy-an-ingress-resource","title":"Step 6 Create and Deploy an Ingress Resource\ud83d\udd17","text":"<p>The final step is to deploy the Ingress resource. This will perform the necessary steps to initiate the certificate signing request with the CA and ultimately provide the HTTPS certificate for your service. To proceed, place the contents below into the file my-nginx-ingress.yaml. Replace mysampledomain.eu with your domain.</p> <p>my-nginx-ingress.yaml</p> <pre><code>apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: my-nginx-ingress\n annotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\n # below annotation is for using cert manager's \"ingress shim\", refer to CertManager documentation\n cert-manager.io/issuer: my-nginx-issuer # use the name of the issuer here\nspec:\n ingressClassName: nginx\n tls:\n - hosts:\n - mysampledomain.eu #change to own domain\n secretName: my-nginx-secret\n rules:\n - host: mysampledomain.eu #change to own domain\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: my-nginx-service\n port:\n number: 80\n</code></pre> <p>Then deploy with:</p> <pre><code>kubectl apply -f my-nginx-ingress.yaml\n</code></pre> <p>If all works well, the setup is complete and after a couple of minutes we should see the lock icon in front of our domain in the browser. 
The service is now HTTPS-secured, and you can verify the details of the certificate by clicking on the lock icon.</p> <p></p>"},{"location":"kubernetes/Deploying-HTTPS-Services-on-Magnum-Kubernetes-in-3Engines-Cloud-Cloud.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>The article Using Kubernetes Ingress on 3Engines Cloud OpenStack Magnum shows how to create an HTTP-based service or site.</p> <p>If you need additional information on Helm charts: Deploying Helm Charts on Magnum Kubernetes Clusters on 3Engines Cloud Cloud.</p>"},{"location":"kubernetes/Deploying-Helm-Charts-on-Magnum-Kubernetes-Clusters-on-3Engines-Cloud-Cloud.html.html","title":"Deploying Helm Charts on Magnum Kubernetes Clusters on 3Engines Cloud Cloud\ud83d\udd17","text":"<p>Kubernetes is a robust and battle-tested environment for running apps and services, yet it can be time-consuming to manually provision all the resources required to run a production-ready deployment. This article introduces Helm as a package manager for Kubernetes. With it, you will be able to quickly deploy complex Kubernetes applications, consisting of code, databases, user interfaces and more.</p>"},{"location":"kubernetes/Deploying-Helm-Charts-on-Magnum-Kubernetes-Clusters-on-3Engines-Cloud-Cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Background - How Helm works</li> <li>Install Helm</li> <li>Add a Helm repository</li> <li>Helm chart repositories</li> <li>Deploy Helm chart on a cluster</li> <li>Customize chart deployment</li> </ul>"},{"location":"kubernetes/Deploying-Helm-Charts-on-Magnum-Kubernetes-Clusters-on-3Engines-Cloud-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p> <p>No. 
2 Basic understanding of Kubernetes</p> <p>We assume you have a basic understanding of Kubernetes, its concepts and ways of working. Explaining them is outside the scope of this article.</p> <p>No. 3 A cluster created on cloud</p> <p>For trying out Helm installation and deployment in an actual environment, create a cluster on the cloud using OpenStack Magnum: How to Create a Kubernetes Cluster Using 3Engines Cloud OpenStack Magnum.</p> <p>No. 4 Active connection to the cloud</p> <p>For Kubernetes, that means the kubectl command line tool installed and a kubeconfig pointing to a cluster. Instructions are provided in the article How To Use Command Line Interface for Kubernetes Clusters On 3Engines Cloud OpenStack Magnum.</p> <p>No. 5 Access to Ubuntu to run code on</p> <p>Code samples in this article assume you are running Ubuntu 20.04 LTS or a similar Linux system. You can run them on</p> <ul> <li>Windows with the Windows Subsystem for Linux,</li> <li>a genuine desktop Ubuntu operating system, or</li> <li>a virtual machine created in the 3Engines Cloud, running the examples from there. These articles will provide the technical know-how if you need it:</li> </ul> <p>How to create a Linux VM and access it from Windows desktop on 3Engines Cloud</p> <p>How to create a Linux VM and access it from Linux command line on 3Engines Cloud</p>"},{"location":"kubernetes/Deploying-Helm-Charts-on-Magnum-Kubernetes-Clusters-on-3Engines-Cloud-Cloud.html.html#background-how-helm-works","title":"Background - How Helm works\ud83d\udd17","text":"<p>The usual sequence for deploying an application on Kubernetes entails:</p> <ul> <li>having one or more containerized application images available in an image registry</li> <li>deploying one or more Kubernetes resources, in the form of manifest YAML files, onto a Kubernetes cluster</li> </ul> <p>The Kubernetes resources, directly or indirectly, point to the container images. They can also contain additional information required by these images to run. 
In a very minimal setup, we would have, e.g., an NGINX container image deployed with a deployment Kubernetes resource, and exposed on a network via a service resource. A production-grade Kubernetes deployment of a larger application usually requires a set of several or more Kubernetes resources to be deployed on the cluster.</p> <p>For each standard deployment of an application on Kubernetes (e.g. a database, a CMS system, a monitoring application), the boilerplate YAML manifests would mostly be the same and only vary based on the specific values assigned (e.g. ports, endpoints, image registry, version, etc.).</p> <p>Helm, therefore, automates the process of provisioning a Kubernetes deployment. The person in charge of the deployment does not have to write each resource from scratch or consider the links between the resources. Instead, they download a Helm chart, which provides predefined resource templates. The values for the templates are read from a central configuration file called values.yaml.</p> <p>Helm charts are designed to cover a broad set of use cases required for deploying an application. The application can then be launched on a cluster with a few commands, within seconds. Specific customizations for an individual deployment can then be easily made by overriding the default values.yaml file.</p>"},{"location":"kubernetes/Deploying-Helm-Charts-on-Magnum-Kubernetes-Clusters-on-3Engines-Cloud-Cloud.html.html#install-helm","title":"Install Helm\ud83d\udd17","text":"<p>You can install Helm on your own development machine. 
To install, download the installer file from the Helm release page, change the file permissions, and run the installation:</p> <pre><code>curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3\nchmod 700 get_helm.sh\n./get_helm.sh\n</code></pre> <p>You can verify the installation by running:</p> <pre><code>$ helm version\n</code></pre> <p>For other operating systems, use the link to download Helm installation files and proceed analogously.</p>"},{"location":"kubernetes/Deploying-Helm-Charts-on-Magnum-Kubernetes-Clusters-on-3Engines-Cloud-Cloud.html.html#add-a-helm-repository","title":"Add a Helm repository\ud83d\udd17","text":"<p>Helm charts are distributed using repositories. For example, a single repository can host several Helm charts from a certain provider. For the purpose of this article, we will add the Bitnami repository, which contains their versions of multiple useful Helm charts, e.g. Redis, Grafana or Elasticsearch. You can add it using the following command:</p> <pre><code>helm repo add bitnami https://charts.bitnami.com/bitnami\n</code></pre> <p>Then verify the available charts in this repository by running:</p> <pre><code>helm search repo\n</code></pre> <p>The following image shows just the start of all the available apps from the Bitnami repository to install with Helm:</p> <p></p>"},{"location":"kubernetes/Deploying-Helm-Charts-on-Magnum-Kubernetes-Clusters-on-3Engines-Cloud-Cloud.html.html#helm-chart-repositories","title":"Helm chart repositories\ud83d\udd17","text":"<p>In the above example, we knew where to find a repository with Helm charts. There are other repositories and they are usually hosted on GitHub or ArtifactHub. 
Let us have a look at the apache page on ArtifactHub:</p> <p></p> <p>Click on the DEFAULT VALUES option (yellow highlight) to see the contents of the default values.yaml file.</p> <p></p> <p>In this file (or in additional tabular information on the chart page), you can check which parameters are available for customization, and what their default values are.</p>"},{"location":"kubernetes/Deploying-Helm-Charts-on-Magnum-Kubernetes-Clusters-on-3Engines-Cloud-Cloud.html.html#check-whether-kubectl-has-access-to-the-cluster","title":"Check whether kubectl has access to the cluster\ud83d\udd17","text":"<p>To proceed further, verify that you have your KUBECONFIG environment variable exported and pointing to a running cluster\u2019s kubeconfig file (see Prerequisite No. 4). If needed, export this environment variable:</p> <pre><code>export KUBECONFIG=&lt;location-of-your-kubeconfig-file&gt;\n</code></pre> <p>If your kubectl is properly installed, you should then be able to list the nodes on your cluster:</p> <pre><code>kubectl get nodes\n</code></pre> <p>That will serve as confirmation that you have access to the cluster.</p>"},{"location":"kubernetes/Deploying-Helm-Charts-on-Magnum-Kubernetes-Clusters-on-3Engines-Cloud-Cloud.html.html#deploy-a-helm-chart-on-a-cluster","title":"Deploy a Helm chart on a cluster\ud83d\udd17","text":"<p>Now that we know where to find repositories with hundreds of charts to choose from, let\u2019s deploy one of them to our cluster.</p> <p>We will install an Apache web server Helm chart. In order to install it with the default configuration, we need to run a single command:</p> <pre><code>helm install my-apache bitnami/apache\n</code></pre> <p>Note that my-apache refers to the concrete release, that is, the concrete deployment running on our cluster. We can adjust this name to our liking. 
Upon running the above command, the chart gets deployed and some insight about our release is provided:</p> <pre><code>NAME: my-apache\nLAST DEPLOYED: Tue Jan 31 10:48:07 2023\nNAMESPACE: default\nSTATUS: deployed\nREVISION: 1\nTEST SUITE: None\nNOTES:\nCHART NAME: apache\nCHART VERSION: 9.2.11\nAPP VERSION: 2.4.55\n....\n</code></pre> <p>As a result, several Kubernetes resources get deployed on the cluster. One of them is the Kubernetes service, which by default gets deployed as a LoadBalancer type. This way your Apache deployment gets immediately publicly exposed, with a floating IP shown in the EXTERNAL-IP column, on the default port 80: <pre><code>$ kubectl get services\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\n...\nmy-apache LoadBalancer 10.254.147.21 64.225.131.111 80:32654/TCP,443:32725/TCP 5m\n</code></pre> <p>Note that the floating IP can take a couple of minutes to appear. After that, once you enter the floating IP into the browser, you will see the service available from the Internet:</p> <p></p>"},{"location":"kubernetes/Deploying-Helm-Charts-on-Magnum-Kubernetes-Clusters-on-3Engines-Cloud-Cloud.html.html#customizing-the-chart-deployment","title":"Customizing the chart deployment\ud83d\udd17","text":"<p>We just saw how quick it was to deploy a Helm chart with the default settings. Usually, before running the chart in production, you will need to adjust a few settings to meet your requirements.</p> <p>To customize the deployment, a quick and dirty approach would be to provide flags on the Helm command line to adjust specific parameters. The problem is that a single deployment may require 10-20 such flags, so this approach may not be the best in the long run.</p> <p>A more universal approach, however, is to customize the values.yaml file. There are two main ways of doing it:</p> Copy the entire values.yaml file: here you only adjust the value of a specific parameter. 
Create a new values.yaml file from scratch: it would contain only the adjusted parameters, with their overridden values. <p>In both scenarios, all defaults apart from the overridden ones will be preserved.</p> <p>As an example of customizing the chart, let us expose the Apache web server on port 8080 instead of the default 80. We will use the second approach and provide a minimal my-values.yaml file for the overrides. The contents of this file will be the following:</p> <p>my-values.yaml</p> <pre><code>service:\n ports:\n http: 8080\n</code></pre> <p>When making these customizations, make sure to keep the indentation and follow the YAML structure, including the respective parent blocks in the tree.</p> <p>A separate adjustment that we will make is to create a dedicated namespace custom-apache for our Helm release and instruct Helm to use this namespace. Such an adjustment is quite common, in order to separate the artifacts related to a specific release/application.</p> <p>Apply the mentioned customizations to the my-custom-apache release, using the following command:</p> <pre><code>helm install my-custom-apache bitnami/apache --values my-values.yaml --namespace custom-apache --create-namespace\n</code></pre> <p>As in the earlier example, the service gets exposed. 
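As an aside on how the override works: Helm applies the --values file as a recursive merge over the chart defaults, so only the keys you specify change. The following Python sketch illustrates that merge behavior (it is not Helm's actual code, and the "defaults" fragment here is a hypothetical excerpt, not the real Bitnami chart values):

```python
def deep_merge(defaults: dict, overrides: dict) -> dict:
    """Recursively merge overrides onto defaults; untouched keys are preserved."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Hypothetical fragment of a chart's default values.yaml
defaults = {"service": {"type": "LoadBalancer", "ports": {"http": 80, "https": 443}}}
# The my-values.yaml override from this section
overrides = {"service": {"ports": {"http": 8080}}}

print(deep_merge(defaults, overrides))
# → {'service': {'type': 'LoadBalancer', 'ports': {'http': 8080, 'https': 443}}}
```

Only service.ports.http changes; the sibling defaults (service.type, service.ports.https) are preserved, which is exactly why a minimal override file is enough.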
This time, to access the service\u2019s floating IP, refer to the newly created custom-apache namespace:</p> <pre><code>kubectl get services -n custom-apache\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nmy-custom-apache LoadBalancer 10.254.230.171 64.225.135.161 8080:31150/TCP,443:30139/TCP 3m51s\n</code></pre> <p>We can see that the application is now exposed on the new port 8080, which can be verified in the browser as well:</p> <p></p>"},{"location":"kubernetes/Deploying-Helm-Charts-on-Magnum-Kubernetes-Clusters-on-3Engines-Cloud-Cloud.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>Deploy other useful services using Helm charts: Argo Workflows, JupyterHub and Vault, among many others that are available.</p> <p>Remember that a chart deployed with Helm is, in the end, just a set of Kubernetes resources. Usually, there is a hefty number of configurable settings in the available open-source charts. You can likewise edit other parameters of an already deployed release, and you can even modify the templates for your specific use case.</p> <p>The following article will show how to use the JetStack repo to install CertManager, with which you can deploy HTTPS services on a Kubernetes cluster in the cloud:</p> <p>Deploying HTTPS Services on Magnum Kubernetes in 3Engines Cloud Cloud</p>"},{"location":"kubernetes/Deploying-vGPU-workloads-on-3Engines-Cloud-Kubernetes.html.html","title":"Deploying vGPU workloads on 3Engines Cloud Kubernetes\ud83d\udd17","text":"<p>Utilizing GPUs (Graphics Processing Units) presents a highly efficient alternative for fast, highly parallel processing of demanding computational tasks such as image processing, machine learning and many others.</p> <p>In a cloud environment, virtual GPU units (vGPUs) are available with certain Virtual Machine flavors. 
This guide provides instructions on how to attach such GPU-equipped VMs as Kubernetes cluster nodes and utilize vGPUs from Kubernetes pods.</p> <p>We will present three alternative ways of adding vGPU capability to your Kubernetes cluster, based on your required scenario. For each, you should be able to verify the vGPU installation and test it by running a vGPU workload.</p>"},{"location":"kubernetes/Deploying-vGPU-workloads-on-3Engines-Cloud-Kubernetes.html.html#what-are-we-going-to-cover","title":"What Are We Going To Cover\ud83d\udd17","text":"<ul> <li>Scenario No. 1 - Add vGPU nodes as a nodegroup on non-GPU Kubernetes clusters created after June 21st 2023</li> <li>Scenario No. 2 - Add vGPU nodes as nodegroups on non-GPU Kubernetes clusters created before June 21st 2023</li> <li>Scenario No. 3 - Create a new GPU-first Kubernetes cluster with vGPU-enabled default nodegroup</li> <li>Verify the vGPU installation</li> <li>Test vGPU workload</li> <li>Add non-GPU nodegroup to a GPU-first cluster</li> </ul>"},{"location":"kubernetes/Deploying-vGPU-workloads-on-3Engines-Cloud-Kubernetes.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Hosting</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface https://horizon.3Engines.com.</p> <p>No. 2 Knowledge of RC files and CLI commands for Magnum</p> <p>You should be familiar with the OpenStack CLI and Magnum CLI. Your RC file should be sourced and pointing to your project in OpenStack. See the article</p> <p>How To Install OpenStack and Magnum Clients for Command Line Interface to 3Engines Cloud Horizon.</p> <p>Note</p> <p>If you are using the CLI to create vGPU nodegroups and are authenticated with application credentials, please ensure the credential is created with the setting</p> <p>unrestricted: true</p> <p>No. 
3 Cluster and kubectl should be operational</p> <p>To connect to the cluster via the kubectl tool, see the article How To Access Kubernetes Cluster Post Deployment Using Kubectl On 3Engines Cloud OpenStack Magnum.</p> <p>No. 4 Familiarity with the notion of nodegroups</p> <p>Creating Additional Nodegroups in Kubernetes Cluster on 3Engines Cloud OpenStack Magnum.</p>"},{"location":"kubernetes/Deploying-vGPU-workloads-on-3Engines-Cloud-Kubernetes.html.html#vgpu-flavors-per-cloud","title":"vGPU flavors per cloud\ud83d\udd17","text":"<p>Below is the list of GPU flavors in each cloud, applicable for use with the Magnum Kubernetes service.</p> WAW3-1 <p>WAW3-1 supports four GPU flavors with Kubernetes, through OpenStack Magnum.</p> Name RAM (MB) Disk (GB) VCPUs vm.a6000.1 14336 40 2 vm.a6000.2 28672 80 4 vm.a6000.3 57344 160 8 vm.a6000.4 114688 320 16 WAW3-2 <p>These are the vGPU flavors for WAW3-2 with Kubernetes, through OpenStack Magnum:</p> Name VCPUS RAM Total Disk Public vm.l40s.1 4 14.9 GB 40 GB Yes vm.l40s.8 32 119.22 GB 320 GB Yes gpu.l40sx2 64 238.44 GB 512 GB Yes gpu.l40sx8 254 953.75 GB 1000 GB Yes FRA1-2 <p>FRA1-2 supports L40S flavors with Kubernetes, through OpenStack Magnum.</p> Name VCPUS RAM Total Disk Public vm.l40s.2 8 29.8 GB 80 GB Yes vm.l40s.8 32 119.22 GB 320 GB Yes"},{"location":"kubernetes/Deploying-vGPU-workloads-on-3Engines-Cloud-Kubernetes.html.html#hardware-comparison-between-rtx-a6000-and-nvidia-l40s","title":"Hardware comparison between RTX A6000 and NVIDIA L40S\ud83d\udd17","text":"<p>The NVIDIA L40S is designed for 24x7 enterprise data center operations and optimized to deploy at scale. 
Compared to the A6000, the NVIDIA L40S is better for</p> <ul> <li>parallel processing tasks,</li> <li>AI workloads,</li> <li>real-time ray tracing applications, and</li> <li>memory-intensive tasks.</li> </ul> <p>Table 1 Comparison of NVIDIA RTX A6000 vs NVIDIA L40S\ud83d\udd17</p> Specification NVIDIA RTX A6000 NVIDIA L40S Architecture Ampere Ada Lovelace Release Date 2020 2023 CUDA Cores 10,752 18,176 Memory 48 GB GDDR6 (768 GB/s bandwidth) 48 GB GDDR6 (864 GB/s bandwidth) Boost Clock Speed Up to 1,800 MHz Up to 2,520 MHz Tensor Cores 336 (3rd generation) 568 (4th generation) Performance Strong performance for diverse workloads Superior AI and machine learning performance Use Cases 3D rendering, video editing, AI development Data center, large-scale AI, enterprise applications"},{"location":"kubernetes/Deploying-vGPU-workloads-on-3Engines-Cloud-Kubernetes.html.html#scenario-1-add-vgpu-nodes-as-a-nodegroup-on-a-non-gpu-kubernetes-clusters-created-after-june-21st-2023","title":"Scenario 1 - Add vGPU nodes as a nodegroup on non-GPU Kubernetes clusters created after June 21st 2023\ud83d\udd17","text":"<p>In order to create a new nodegroup called gpu, with one node of a vGPU flavor, say vm.a6000.2, we can use the following Magnum CLI command:</p> <pre><code>openstack coe nodegroup create $CLUSTER_ID gpu \\\n--labels \"worker_type=gpu\" \\\n--merge-labels \\\n--role worker \\\n--flavor vm.a6000.2 \\\n--node-count 1\n</code></pre> <p>Adjust the node-count and flavor to your preference, set $CLUSTER_ID to the ID of your cluster (this can be taken from the Clusters view in the Horizon UI), and ensure the role is set to worker.</p> <p>The key setting is adding the label worker_type=gpu:</p> <p>Your request will be accepted:</p> <p></p> <p>Now list the available nodegroups:</p> <pre><code>openstack coe nodegroup list $CLUSTER_ID \\\n--max-width 120\n</code></pre> <p>We get:</p> <p></p> <p>The result is that a new nodegroup called gpu is created in the 
cluster and that it is using the GPU flavor.</p>"},{"location":"kubernetes/Deploying-vGPU-workloads-on-3Engines-Cloud-Kubernetes.html.html#scenario-2-add-vgpu-nodes-as-nodegroups-on-non-gpu-kubernetes-clusters-created-before-june-21st-2023","title":"Scenario 2 - Add vGPU nodes as nodegroups on non-GPU Kubernetes clusters created before June 21st 2023\ud83d\udd17","text":"<p>The instructions are the same as in the previous scenario, with the exception of adding an additional label:</p> <pre><code>existing_helm_handler_master_id=$MASTER_0_SERVER_ID\n</code></pre> <p>where $MASTER_0_SERVER_ID is the ID of the master0 VM from your cluster. The uuid value can be obtained</p> <ul> <li>in Horizon, through the Instances view,</li> <li>or using a CLI command to isolate the uuid of the master node:</li> </ul> <pre><code>openstack coe nodegroup list $CLUSTER_ID_OLDER \\\n-c uuid \\\n-c name \\\n-c status \\\n-c role\n</code></pre> <p></p> <p>In this example, the uuid is 413c7486-caa9-4e12-be3b-3d9410f2d32f. Set the value for the master handler label:</p> <pre><code>export MASTER_0_SERVER_ID=\"413c7486-caa9-4e12-be3b-3d9410f2d32f\"\n</code></pre> <p>and execute the following command to create an additional nodegroup in this scenario:</p> <pre><code>openstack coe nodegroup create $CLUSTER_ID_OLDER gpu \\\n--labels \"worker_type=gpu,existing_helm_handler_master_id=$MASTER_0_SERVER_ID\" \\\n--merge-labels \\\n--role worker \\\n--flavor vm.a6000.2 \\\n--node-count 1\n</code></pre> <p>There must be no spaces between the labels.</p> <p>The request will be accepted and, after a while, a new nodegroup based on the GPU flavor will be available. 
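Since the --labels argument must be a single comma-separated string with no spaces, it can be convenient to build it programmatically when scripting nodegroup creation. A small illustrative Python sketch (the helper itself is hypothetical; the label keys are the ones used above):

```python
def build_labels(labels: dict) -> str:
    """Join key=value pairs with commas and no spaces, as the --labels flag expects."""
    for key, value in labels.items():
        if " " in key or " " in str(value):
            raise ValueError(f"label {key!r} must not contain spaces")
    return ",".join(f"{key}={value}" for key, value in labels.items())

labels = {
    "worker_type": "gpu",
    "existing_helm_handler_master_id": "413c7486-caa9-4e12-be3b-3d9410f2d32f",
}
print(build_labels(labels))
# → worker_type=gpu,existing_helm_handler_master_id=413c7486-caa9-4e12-be3b-3d9410f2d32f
```

The resulting string can then be passed verbatim to openstack coe nodegroup create --labels.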
List the nodegroups with the command:</p> <pre><code>openstack coe nodegroup list $CLUSTER_ID_OLDER --max-width 120\n</code></pre>"},{"location":"kubernetes/Deploying-vGPU-workloads-on-3Engines-Cloud-Kubernetes.html.html#scenario-3-create-a-new-gpu-first-kubernetes-cluster-with-vgpu-enabled-default-nodegroup","title":"Scenario 3 - Create a new GPU-first Kubernetes cluster with vGPU-enabled default nodegroup\ud83d\udd17","text":"<p>To create a new vGPU-enabled cluster, you can use the usual Horizon commands, selecting one of the existing templates with vgpu in their names:</p> <p></p> <p>In the example below, we use the CLI to create a cluster called k8s-gpu-with_template with the k8s-1.23.16-vgpu-v1.0.0 template. The sample cluster has</p> <ul> <li>one master node with flavor eo1.medium and</li> <li>one worker node with vm.a6000.2 flavor with vGPU enabled.</li> </ul> <p>To adjust these parameters to your requirements, you will need to replace $KEYPAIR with your own. Also, to verify that the nvidia labels are correctly installed, first create a namespace called nvidia-device-plugin. You can then list the namespaces to be sure that it was created properly. So, the preparation commands look like this:</p> <pre><code>export KEYPAIR=\"sshkey\"\nkubectl create namespace nvidia-device-plugin\nkubectl get namespaces\n</code></pre> <p>The final command to create the required cluster is:</p> <pre><code>openstack coe cluster create k8s-gpu-with_template \\\n--cluster-template \"k8s-1.23.16-vgpu-v1.0.0\" \\\n--keypair=$KEYPAIR \\\n--master-count 1 \\\n--node-count 1\n</code></pre>"},{"location":"kubernetes/Deploying-vGPU-workloads-on-3Engines-Cloud-Kubernetes.html.html#verify-the-vgpu-installation","title":"Verify the vGPU installation\ud83d\udd17","text":"<p>You can verify that vGPU-enabled nodes were properly added to your cluster by checking the nvidia-device-plugin DaemonSet deployed in the nvidia-device-plugin namespace. 
The command to check the DaemonSet in the nvidia-device-plugin namespace is:</p> <pre><code>kubectl get daemonset nvidia-device-plugin \\\n-n nvidia-device-plugin\n</code></pre> <p></p> <p>See which nodes are now present:</p> <pre><code>kubectl get node\n</code></pre> <p></p> <p>Each GPU node should have several nvidia labels added. To verify, you can run one of the commands below, the second of which will show the labels formatted:</p> <pre><code>kubectl get node k8s-gpu-cluster-XXXX --show-labels\nkubectl get node k8s-gpu-cluster-XXXX \\\n-o go-template='{{range $key, $value := .metadata.labels}}{{$key}}: {{$value}}{{\"\\n\"}}{{end}}'\n</code></pre> <p>Concretely, in our case, the second command is:</p> <pre><code>kubectl get node k8s-gpu-with-template-lfs5335ymxcn-node-0 \\\n-o go-template='{{range $key, $value := .metadata.labels}}{{$key}}: {{$value}}{{\"\\n\"}}{{end}}'\n</code></pre> <p>and the result will look like this:</p> <p></p> <p>Also, GPU workers are tainted by default with the taint:</p> <pre><code>node.3Engines.com/type=gpu:NoSchedule\n</code></pre> <p>This can be verified by running the following command, in which we use the name of the existing node:</p> <pre><code>kubectl describe node k8s-gpu-with-template-lfs5335ymxcn-node-0 | grep 'Taints'\n</code></pre>"},{"location":"kubernetes/Deploying-vGPU-workloads-on-3Engines-Cloud-Kubernetes.html.html#run-test-vgpu-workload","title":"Run test vGPU workload\ud83d\udd17","text":"<p>We can run a sample workload on vGPU. 
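Before doing so, it helps to see why the NoSchedule taint matters: a pod is only scheduled onto a tainted node if it tolerates every NoSchedule taint on that node. The following Python sketch is a simplified, illustrative model of that matching rule (not the Kubernetes scheduler's actual implementation, which handles more cases):

```python
def tolerates(taint: dict, toleration: dict) -> bool:
    """A toleration matches a taint when effect, key and value line up."""
    if toleration.get("effect") and toleration["effect"] != taint["effect"]:
        return False
    if toleration.get("operator", "Equal") == "Exists":
        # 'Exists' with an empty key would tolerate any taint with that effect.
        return not toleration.get("key") or toleration["key"] == taint["key"]
    return toleration.get("key") == taint["key"] and toleration.get("value") == taint["value"]

def schedulable(node_taints: list, pod_tolerations: list) -> bool:
    """The pod fits only if every NoSchedule taint on the node is tolerated."""
    return all(
        any(tolerates(taint, tol) for tol in pod_tolerations)
        for taint in node_taints
        if taint["effect"] == "NoSchedule"
    )

# The GPU taint applied to vGPU workers in this article
gpu_taint = {"key": "node.3Engines.com/type", "value": "gpu", "effect": "NoSchedule"}
# The two tolerations carried by the sample GPU pod
pod_tolerations = [
    {"key": "nvidia.com/gpu", "operator": "Exists", "effect": "NoSchedule"},
    {"key": "node.3Engines.com/type", "operator": "Equal", "value": "gpu", "effect": "NoSchedule"},
]

print(schedulable([gpu_taint], pod_tolerations))  # → True
print(schedulable([gpu_taint], []))               # → False: no tolerations, pod is repelled
```

This is why the pod manifest in the next step carries tolerations matching the GPU taint; without them, the pod would stay Pending.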
To do so, create a YAML manifest file vgpu-pod.yaml, with the following contents:</p> <p>vgpu-pod.yaml</p> <pre><code>apiVersion: v1\nkind: Pod\nmetadata:\n name: gpu-pod\nspec:\n restartPolicy: Never\n containers:\n - name: cuda-container\n image: nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda10.2\n resources:\n limits:\n nvidia.com/gpu: 1 # requesting 1 vGPU\n tolerations:\n - key: nvidia.com/gpu\n operator: Exists\n effect: NoSchedule\n - effect: NoSchedule\n key: node.3Engines.com/type\n operator: Equal\n value: gpu\n</code></pre> <p>Apply with:</p> <pre><code>kubectl apply -f vgpu-pod.yaml\n</code></pre> <p></p> <p>This pod will request one vGPU, so effectively it will utilize the vGPU allocated to a single node. For example, if you had a cluster with 2 vGPU-enabled nodes, you could run 2 pods requesting 1 vGPU each.</p> <p>Also, for scheduling the pods on GPU nodes, you will need to apply the two tolerations as per the example above. That effectively means that the pod will only be scheduled on GPU nodes.</p> <p>Looking at the logs, we see that the workload was indeed performed:</p> <pre><code>kubectl logs gpu-pod\n\n[Vector addition of 50000 elements]\nCopy input data from the host memory to the CUDA device\nCUDA kernel launch with 196 blocks of 256 threads\nCopy output data from the CUDA device to the host memory\nTest PASSED\nDone\n</code></pre>"},{"location":"kubernetes/Deploying-vGPU-workloads-on-3Engines-Cloud-Kubernetes.html.html#add-non-gpu-nodegroup-to-a-gpu-first-cluster","title":"Add non-GPU nodegroup to a GPU-first cluster\ud83d\udd17","text":"<p>We refer to GPU-first clusters as the ones created with the worker_type=gpu flag. For example, in a cluster created with Scenario No. 
3, the default nodegroup consists of vGPU nodes.</p> <p>In such clusters, to add an additional, non-GPU nodegroup, you will need to:</p> <ul> <li>specify the image ID of the system that manages this nodegroup</li> <li>add the label worker_type=default</li> <li>ensure that the flavor for this nodegroup is non-GPU.</li> </ul> <p>In order to retrieve the image ID, you need to know which template you want to use to create the new nodegroup. Out of the existing non-GPU templates, we select k8s-1.23.16-v1.0.2 for this example. Run the following command to extract the image ID, as that will be needed for nodegroup creation:</p> <pre><code>openstack coe cluster \\\ntemplate show k8s-1.23.16-v1.0.2 | grep image_id\n</code></pre> <p>In our case, this yields the following result:</p> <p></p> <p>We can then add the non-GPU nodegroup with the following command, in which you can adjust the parameters. In our example, we use the cluster name from Scenario 3 (the one freshly created with GPU) above and set the worker node flavor to eo1.medium:</p> <pre><code>export CLUSTER_ID=\"k8s-gpu-with_template\"\nexport IMAGE_ID=\"42696e90-57af-4124-8e20-d017a44d6e24\"\nopenstack coe nodegroup create $CLUSTER_ID default \\\n--labels \"worker_type=default\" \\\n--merge-labels \\\n--role worker \\\n--flavor \"eo1.medium\" \\\n--image $IMAGE_ID \\\n--node-count 1\n</code></pre> <p>Then list the nodegroup contents to see whether the creation succeeded:</p> <pre><code>openstack coe nodegroup list $CLUSTER_ID \\\n--max-width 120\n</code></pre> <p></p>"},{"location":"kubernetes/Enable-Kubeapps-app-launcher-on-3Engines-Cloud-Magnum-Kubernetes-cluster.html.html","title":"Enable Kubeapps app launcher on 3Engines Cloud Magnum Kubernetes cluster\ud83d\udd17","text":"<p>Kubeapps app-launcher enables quick deployments of applications on your Kubernetes cluster, with a convenient graphical user interface. 
In this article we provide guidelines for creating a Kubernetes cluster with the Kubeapps feature enabled, and for deploying sample applications.</p>"},{"location":"kubernetes/Enable-Kubeapps-app-launcher-on-3Engines-Cloud-Magnum-Kubernetes-cluster.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Brief background - deploying applications on Kubernetes</li> <li>Create a cluster with Kubeapps quick-launcher enabled</li> <li>Access Kubeapps service locally from browser</li> <li>Launch sample application from Kubeapps</li> <li>Current limitations</li> </ul>"},{"location":"kubernetes/Enable-Kubeapps-app-launcher-on-3Engines-Cloud-Magnum-Kubernetes-cluster.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Hosting</p> <p>You need a 3Engines Cloud hosting account with the Horizon interface https://horizon.3Engines.com.</p> <p>The resources that you require and use will reflect on the state of your account wallet. Check your account statistics at https://portal.3Engines.com/.</p> <p>No. 2 Create Kubernetes cluster from Horizon GUI</p> <p>Know how to create a Kubernetes cluster from the Horizon GUI, as described in the article How to Create a Kubernetes Cluster Using 3Engines Cloud OpenStack Magnum</p> <p>No. 3 How to Access Kubernetes cluster post-deployment</p> <p>Access to a Linux command line and the ability to access the cluster, as described in the article How To Access Kubernetes Cluster Post Deployment Using Kubectl On 3Engines Cloud OpenStack Magnum</p> <p>No. 4 Handling Helm</p> <p>Some familiarity with Helm, to customize app deployments with Kubeapps. See Deploying Helm Charts on Magnum Kubernetes Clusters on 3Engines Cloud.</p> <p>No. 
5 Access to 3Engines clouds</p> <p>Kubeapps is available on the following clouds: WAW3-2, FRA1-2, WAW3-1.</p>"},{"location":"kubernetes/Enable-Kubeapps-app-launcher-on-3Engines-Cloud-Magnum-Kubernetes-cluster.html.html#background","title":"Background\ud83d\udd17","text":"<p>Deploying complex applications on Kubernetes becomes notably more efficient and convenient with Helm. Adding to this convenience, Kubeapps, an app-launcher with a Graphical User Interface (GUI), provides a user-friendly starting point for application management. This GUI allows you to deploy and manage applications on your K8s cluster, limiting the need for deep command-line expertise.</p> <p>Kubeapps app-launcher can be enabled at cluster creation time. It will run as a local service, accessible from the browser.</p>"},{"location":"kubernetes/Enable-Kubeapps-app-launcher-on-3Engines-Cloud-Magnum-Kubernetes-cluster.html.html#create-kubernetes-cluster-with-kubeapps-quick-launcher-enabled","title":"Create Kubernetes cluster with Kubeapps quick-launcher enabled\ud83d\udd17","text":"<p>Creating a Kubernetes cluster with Kubeapps enabled follows the generic guideline described in Prerequisite No. 2.</p> <p>When creating the cluster in Horizon according to this guideline:</p> <ul> <li>insert three labels with the below values in the \u201cAdvanced\u201d tab and</li> <li>choose to override the labels.</li> </ul> <pre><code>kubeapps_enabled=true,helm_client_tag=v3.11.3,helm_client_sha256=ca2d5d40d4cdfb9a3a6205dd803b5bc8def00bd2f13e5526c127e9b667974a89\n</code></pre> <p>Important</p> <p>There must be no spaces between label values.</p> <p>Inserting these labels is shown in the image below:</p> <p></p>
You should have the kubectl command line tool available, as specified in Prerequisite No. 3.</p> <p>The Kubeapps service is enabled for the kubeapps-operator service account. We need to obtain the token that authenticates this service account with the cluster.</p> <p>To print the token, run the following command:</p> <pre><code>kubectl get secret $(kubectl get serviceaccount kubeapps-operator -o jsonpath='{.secrets[].name}') -o go-template='{{.data.token | base64decode}}' &amp;&amp; echo\n</code></pre> <p>As a result, a long token will be printed, similar to the following:</p> <p></p> <p>Copy the token. Then run the following command to tunnel the traffic between your local machine and the Kubeapps service:</p> <pre><code>kubectl port-forward -n kube-system svc/magnum-apps-kubeapps 8080:80\n</code></pre> <p>Type localhost:8080 in your browser to access Kubeapps, paste the token copied earlier and click Submit:</p> <p></p> <p>You can now operate Kubeapps:</p> <p></p>"},{"location":"kubernetes/Enable-Kubeapps-app-launcher-on-3Engines-Cloud-Magnum-Kubernetes-cluster.html.html#launch-sample-application-from-kubeapps","title":"Launch sample application from Kubeapps\ud83d\udd17","text":"<p>Clicking on \u201cCatalog\u201d exposes a long list of applications available for download from the Kubeapps app store.</p> <p></p> <p>As an example, we will install the Apache webserver; in order to do so, click on the \u201cApache\u201d box. Note that the Kubeapps interface is a graphical shortcut, which behind the scenes installs a Helm chart on the cluster.</p> <p>Once you familiarize yourself with the prerequisites and additional information about this chart, click Deploy in the top right corner:</p> <p></p> <p>The next screen, with the default \u201cVisual Editor\u201d tab enabled, allows you to define a few major adjustments to how the service is deployed, e.g. specifying the service type or replica count. 
Access to more detailed configurations (reflecting the Helm chart\u2019s values.yaml configuration file) is also available in the \u201cYAML editor\u201d GUI tab.</p> <p>To follow along with the article, do not change the defaults; only enter the Name of the deployment (in our case apache-test) and hit Deploy with the available version:</p> <p></p> <p>Since we deployed a service of type LoadBalancer, we need to wait a few minutes for it to be deployed on the cloud. After this completes, we can see the screen confirming the deployment is complete:</p> <p></p> <p>Also, in the console, we can double-check that the Apache service, along with the deployment and pod, was properly deployed. Execute the following commands:</p> <pre><code>kubectl get deployments\nkubectl get pods\nkubectl get services\n</code></pre> <p>The results will be similar to this:</p> <p></p>"},{"location":"kubernetes/Enable-Kubeapps-app-launcher-on-3Engines-Cloud-Magnum-Kubernetes-cluster.html.html#current-limitations","title":"Current limitations\ud83d\udd17","text":"<p>Both Kubeapps and the Helm charts deployed by this launcher are open-source projects, which are continuously evolving. The versions installed on 3Engines Cloud provide a snapshot of this development, as a convenience feature.</p> <p>It is expected that not all applications can be installed with one click, and additional configuration may be needed in particular cases.</p> <p>One known limitation is that certain charts will require RWM (ReadWriteMany) persistent volume claims to properly operate. Currently, RWM persistent volumes are not natively available on 3Engines Cloud. A workaround could be installing an NFS server and deploying a StorageClass with an RWM-capable provisioner, e.g. 
using the nfs-subdir-external-provisioner project from GitHub.</p> <p>For NFS on a Kubernetes cluster, see Create and access NFS server from Kubernetes on 3Engines Cloud.</p>"},{"location":"kubernetes/GitOps-with-Argo-CD-on-3Engines-Cloud-Kubernetes.html.html","title":"GitOps with Argo CD on 3Engines Cloud Kubernetes\ud83d\udd17","text":"<p>Argo CD is a continuous deployment tool for Kubernetes, designed with GitOps and Infrastructure as Code (IaC) principles in mind. It automatically ensures that the state of applications deployed on a Kubernetes cluster is always in sync with a dedicated Git repository where the desired state is defined.</p> <p>In this article we will demonstrate installing Argo CD on a Kubernetes cluster and deploying an application using this tool.</p>"},{"location":"kubernetes/GitOps-with-Argo-CD-on-3Engines-Cloud-Kubernetes.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Install Argo CD</li> <li>Access Argo CD from your browser</li> <li>Create Git repository and push your app deployment configurations</li> <li>Create and deploy Argo CD application resource</li> <li>View the deployed resources</li> </ul>"},{"location":"kubernetes/GitOps-with-Argo-CD-on-3Engines-Cloud-Kubernetes.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p> <p>No. 2 Kubernetes cluster</p> <p>How to Create a Kubernetes Cluster Using 3Engines Cloud OpenStack Magnum</p> <p>No. 3 Access to cluster with kubectl</p> <p>How To Access Kubernetes Cluster Post Deployment Using Kubectl On 3Engines Cloud OpenStack Magnum</p> <p>No. 4 Familiarity with Helm</p> <p>Here is how to install and start using Helm charts:</p> <p>Deploying Helm Charts on Magnum Kubernetes Clusters on 3Engines Cloud</p> <p>No. 
5 Access to your own Git repository</p> <p>You can host the repository for this article on the GitLab instance created in the article Install GitLab on 3Engines Cloud Kubernetes. You may also use GitHub, GitLab or other git-based source control platforms.</p> <p>No. 6 git CLI operational</p> <p>The git command installed locally. It works with GitHub, GitLab and other source control platforms based on git.</p> <p>No. 7 Access to exemplary Flask application</p> <p>You should have access to the example Flask application, which will be downloaded from GitHub later in the article. It will serve as an example of a minimal application and, by changing it, we will demonstrate that Argo CD captures those changes continually.</p>"},{"location":"kubernetes/GitOps-with-Argo-CD-on-3Engines-Cloud-Kubernetes.html.html#step-1-install-argo-cd","title":"Step 1 Install Argo CD\ud83d\udd17","text":"<p>Let\u2019s install Argo CD first, under the following assumptions:</p> <ul> <li>this article has been tested on Kubernetes version 1.25</li> <li>use GUI only (no CLI used in this guide)</li> <li>deploy Argo CD without TLS certificates.</li> </ul> <p>Here is an in-depth installation guide.</p> <p>For production scenarios, it is recommended to apply TLS.</p> <p>Let\u2019s first create a dedicated namespace within our existing Kubernetes cluster. The namespace should be explicitly named argocd:</p> <pre><code>kubectl create namespace argocd\n</code></pre> <p>Then install Argo CD:</p> <pre><code>kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml\n</code></pre>"},{"location":"kubernetes/GitOps-with-Argo-CD-on-3Engines-Cloud-Kubernetes.html.html#step-2-access-argo-cd-from-your-browser","title":"Step 2 Access Argo CD from your browser\ud83d\udd17","text":"<p>By default, the Argo CD web application is not accessible from the browser. 
To enable this, change the applicable service from ClusterIP to LoadBalancer type with the command:</p> <pre><code>kubectl patch svc argocd-server -n argocd -p '{\"spec\": {\"type\": \"LoadBalancer\"}}'\n</code></pre> <p>After 1-2 minutes, retrieve the IP address of the service:</p> <pre><code>kubectl get service argocd-server -n argocd\n</code></pre> <p>In our case, this provides the below result and indicates we have Argo CD running on IP address 185.254.233.247:</p> <p></p> <p>Type the IP address you extracted into your browser (it will be a different IP address in your case, so be sure to replace 185.254.233.247 cited here with your own address). You can expect a warning about an invalid certificate. To suppress the warning, click \u201cAdvanced\u201d and then \u201cProceed to Unsafe\u201d to be transferred to the login screen of Argo CD:</p> <p></p> <p>The login is admin. To get the password, extract it from the deployed Kubernetes secret with the following command:</p> <pre><code>kubectl get secret argocd-initial-admin-secret -n argocd -ojsonpath='{.data.password}' | base64 --decode ; echo\n</code></pre> <p>After typing in your credentials to the login form, you get transferred to the following screen:</p> <p></p>"},{"location":"kubernetes/GitOps-with-Argo-CD-on-3Engines-Cloud-Kubernetes.html.html#step-3-create-a-git-repository","title":"Step 3 Create a Git repository\ud83d\udd17","text":"<p>You need to create a git repository first. The state of the application on your Kubernetes cluster will be synced to the state of this repo. It is recommended that it is a separate repository from your application code, to avoid triggering the CI pipelines whenever the configuration changes.</p> <p>You will copy into this newly created repository files already available in a different GitHub repo, the one mentioned in Prerequisite No. 7 Access to exemplary Flask application.</p> <p>Create the repository first; we call ours argocd-sample. 
While filling in the form, check the option to initialize with a README and choose Public visibility:</p> <p></p> <p>In that view, the project URL will be pre-filled, corresponding to the URL of your GitLab instance. In the place denoted with a blue rectangle, you should enter your user name; usually, it will be root but can be anything else. If there are already some users defined in GitLab, their names will appear in a drop-down menu.</p>"},{"location":"kubernetes/GitOps-with-Argo-CD-on-3Engines-Cloud-Kubernetes.html.html#step-4-download-flask-application","title":"Step 4 Download Flask application\ud83d\udd17","text":"<p>The next goal is to download two yaml files to a folder called ArgoCD-sample and its subfolder deployment.</p> <p>After submitting the \u201cCreate project\u201d form, you will receive a list of commands to work with your repo. Review them and switch to the CLI from Prerequisite No. 6. Clone the entire 3Engines K8s samples repo, then extract the sub-folder called Flask-K8s-deployment. For clarity, we rename its contents to a new folder, ArgoCD-sample. Use</p> <pre><code>mkdir ~/ArgoCD-sample\n</code></pre> <p>if this is the first time you are working through this article. Then apply the following set of commands:</p> <pre><code>git clone https://github.com/3Engines/K8s-samples\nmv K8s-samples/Flask-K8s-deployment ~/ArgoCD-sample/deployment\nrm -rf K8s-samples/\n</code></pre> <p>Files deployment.yaml and service.yaml deploy a sample Flask application on Kubernetes and expose it as a service. These are typical minimal examples for a deployment and a service and can be obtained from the 3Engines Kubernetes samples repository.</p>"},{"location":"kubernetes/GitOps-with-Argo-CD-on-3Engines-Cloud-Kubernetes.html.html#step-5-push-your-app-deployment-configurations","title":"Step 5 Push your app deployment configurations\ud83d\udd17","text":"<p>Then you need to upload the files deployment.yaml and service.yaml to the remote repository. 
Since you are using git, you perform the upload by syncing your local repo with the remote. First initialize the repo locally, then push the files to your remote with the following commands (replace the remote URL with your own git repository instance):</p> <pre><code>cd ~/ArgoCD-sample\ngit init\ngit remote add origin git@gitlab.mysampledomain.info:root/ArgoCD-sample.git\ngit add .\ngit commit -m \"First commit\"\ngit push origin master\n</code></pre> <p>As a result, at this point, we have the two files available in the remote repository, in the deployment folder:</p> <p></p>"},{"location":"kubernetes/GitOps-with-Argo-CD-on-3Engines-Cloud-Kubernetes.html.html#step-6-create-argo-cd-application-resource","title":"Step 6 Create Argo CD application resource\ud83d\udd17","text":"<p>Argo CD configuration for a specific application is defined using an application custom resource. Such a resource connects a Kubernetes cluster with a repository where deployment configurations are stored.</p> <p>Directly in the ArgoCD-sample folder, create a file application.yaml, which will represent the application; be sure to replace gitlab.mysampledomain.info with your own domain.</p> <p>application.yaml</p> <pre><code>apiVersion: argoproj.io/v1alpha1\nkind: Application\nmetadata:\n name: myapp-application\n namespace: argocd\nspec:\n project: default\n syncPolicy:\n syncOptions:\n - CreateNamespace=true\n automated:\n selfHeal: true\n prune: true\n source:\n repoURL: https://gitlab.mysampledomain.info/root/argocd-sample.git\n targetRevision: HEAD\n path: deployment\n destination:\n server: https://kubernetes.default.svc\n namespace: myapp\n</code></pre> <p>Some explanations of this file:</p> spec.project.default Specifies that our application is associated with the default project (represented as appproject CRD in Kubernetes). Additional projects can be created and used for managing multiple applications. 
spec.syncPolicy.syncOptions.CreateNamespace=true Ensures that a namespace (specified in spec.destination.namespace) will be automatically created on our cluster if it does not exist already. spec.syncPolicy.automated.selfHeal: true Ensures that any manual changes in the cluster (e.g. applied using kubectl) will trigger a synchronization with the Git repo, overwrite these manual changes and therefore ensure consistency between the cluster and the repo state. spec.syncPolicy.automated.prune: true Ensures that deletion of a resource definition in the repo will also delete this resource from the Kubernetes cluster. spec.source.repoURL This is the URL of our git repository where deployment artifacts reside. spec.source.targetRevision.HEAD Ensures that the Kubernetes cluster will be synced with the most recent update on the git repository. spec.source.path The name of the folder in the Git repository, where the yaml manifests are stored. spec.destination.server The address of the Kubernetes cluster where we deploy our app. Since this is the same cluster where Argo CD is running, it can be accessed using the cluster\u2019s internal DNS addressing. spec.destination.namespace The namespace in the cluster where the application will be deployed."},{"location":"kubernetes/GitOps-with-Argo-CD-on-3Engines-Cloud-Kubernetes.html.html#step-7-deploy-argo-cd-application","title":"Step 7 Deploy Argo CD application\ud83d\udd17","text":"<p>After we created the application.yaml file, the next step is to commit it and push to the remote repo. 
We can do this with the following commands:</p> <pre><code>git add -A\ngit commit -m \"Added application.yaml file\"\ngit push origin master\n</code></pre> <p>The final step is to apply the application.yaml configuration to the cluster with the command below:</p> <pre><code>kubectl apply -f application.yaml\n</code></pre>"},{"location":"kubernetes/GitOps-with-Argo-CD-on-3Engines-Cloud-Kubernetes.html.html#step-8-view-the-deployed-resources","title":"Step 8 View the deployed resources\ud83d\udd17","text":"<p>After performing the steps above, switch views to the Argo CD UI. We can see that our application appears on the list of applications and that the state to be applied on the cluster was properly captured from the Git repo. It will take a few minutes to complete the deployment of resources on the cluster:</p> <p></p> <p>This is the view of our app after deployment was properly applied:</p> <p></p> <p>After clicking on the application\u2019s box, we can also see the details of all the resources which contribute to this deployment, both high-level and low-level ones.</p> <p></p> <p>With the default settings, Argo CD will poll the Git repository every 3 minutes to capture the desired state of the cluster. If any changes in the repo are detected, the applications on the cluster will be automatically relaunched with the new configuration applied.</p>"},{"location":"kubernetes/GitOps-with-Argo-CD-on-3Engines-Cloud-Kubernetes.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<ul> <li>test applying changes to the deployment in the repository (e.g. 
commit a deployment with a different image in the container spec), then verify that Argo CD captures the change and updates the cluster state</li> <li>customize the deployment of Argo CD to enable HTTPS</li> <li>integrate Argo CD with your identity management tool; for details, see Deploy Keycloak on Kubernetes with a sample app on 3Engines Cloud</li> </ul> <p>Also of interest would be the following article: CI/CD pipelines with GitLab on 3Engines Cloud Kubernetes - building a Docker image</p>"},{"location":"kubernetes/HTTP-Request-based-Autoscaling-on-K8S-using-Prometheus-and-Keda-on-3Engines-Cloud.html.html","title":"HTTP Request-based Autoscaling on K8S using Prometheus and Keda on 3Engines Cloud\ud83d\udd17","text":"<p>The Kubernetes Horizontal Pod Autoscaler (HPA) natively utilizes CPU and RAM metrics as the default triggers for increasing or decreasing the number of pods. While this is often sufficient, there can be use cases where scaling on custom metrics is preferred.</p> <p>KEDA is a tool for autoscaling based on events/metrics provided from popular sources/technologies such as Prometheus, Kafka, Postgres and multiple others.</p> <p>In this article we will deploy a sample app on 3Engines Cloud. We will collect HTTP requests from NGINX Ingress on our Kubernetes cluster and, using KEDA with the Prometheus scaler, apply custom HTTP request-based scaling.</p> <p>Note</p> <p>We will use the NGINX web server to demonstrate the app, and NGINX ingress to deploy it and collect metrics. 
Note that the NGINX web server and NGINX ingress are two separate pieces of software, with two different purposes.</p>"},{"location":"kubernetes/HTTP-Request-based-Autoscaling-on-K8S-using-Prometheus-and-Keda-on-3Engines-Cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Install NGINX ingress on Magnum cluster</li> <li>Install Prometheus</li> <li>Install Keda</li> <li>Deploy a sample app</li> <li>Deploy our app ingress</li> <li>Access Prometheus dashboard</li> <li>Deploy KEDA ScaledObject</li> <li>Test with Locust</li> </ul>"},{"location":"kubernetes/HTTP-Request-based-Autoscaling-on-K8S-using-Prometheus-and-Keda-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p> <p>No. 2 Create a new Kubernetes cluster without Magnum NGINX preinstalled from Horizon UI</p> <p>The default NGINX ingress deployed by Magnum from the Horizon UI does not yet implement Prometheus metrics export. Instead of trying to configure the Magnum ingress for this use case, we will install a new NGINX ingress. To avoid conflicts, it is best to follow the instructions below on a Kubernetes cluster without Magnum NGINX preinstalled from the Horizon UI.</p> <p>No. 3 kubectl pointed to the Kubernetes cluster</p> <p>The following article gives options for creating a new cluster and activating the kubectl command:</p> <p>How To Access Kubernetes Cluster Post Deployment Using Kubectl On 3Engines Cloud OpenStack Magnum.</p> <p>As mentioned, create the cluster without installing the NGINX ingress option.</p> <p>No. 
4 Familiarity with deploying Helm charts</p> <p>This article will introduce you to Helm charts on Kubernetes:</p> <p>Deploying Helm Charts on Magnum Kubernetes Clusters on 3Engines Cloud</p>"},{"location":"kubernetes/HTTP-Request-based-Autoscaling-on-K8S-using-Prometheus-and-Keda-on-3Engines-Cloud.html.html#install-nginx-ingress-on-magnum-cluster","title":"Install NGINX ingress on Magnum cluster\ud83d\udd17","text":"<p>Type in the following commands to add the ingress-nginx Helm repo and then install the chart. Note that we are using a custom namespace, ingress-nginx, as well as setting options to enable Prometheus metrics.</p> <pre><code>helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx\nhelm repo update\n\nkubectl create namespace ingress-nginx\n\nhelm install ingress-nginx ingress-nginx/ingress-nginx \\\n--namespace ingress-nginx \\\n--set controller.metrics.enabled=true \\\n--set-string controller.podAnnotations.\"prometheus\\.io/scrape\"=\"true\" \\\n--set-string controller.podAnnotations.\"prometheus\\.io/port\"=\"10254\"\n</code></pre> <p>Now run the following command to get the external IP address of the ingress controller, which will be used by ingress resources created in later steps of this article.</p> <pre><code>$ kubectl get services -n ingress-nginx\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\ningress-nginx-controller LoadBalancer 10.254.118.18 64.225.135.67 80:31573/TCP,443:30786/TCP 26h\n</code></pre> <p>We get 64.225.135.67. 
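</p> <p>If you prefer to capture this address into a shell variable for later steps, a convenience sketch using jsonpath looks like this (the INGRESS_IP variable name is our own choice):</p> <pre><code>export INGRESS_IP=$(kubectl get service ingress-nginx-controller \\\n-n ingress-nginx \\\n-o jsonpath='{.status.loadBalancer.ingress[0].ip}')\necho $INGRESS_IP\n</code></pre> <p>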
Instead of that value, use the EXTERNAL-IP value you get in your terminal after running the above command.</p>"},{"location":"kubernetes/HTTP-Request-based-Autoscaling-on-K8S-using-Prometheus-and-Keda-on-3Engines-Cloud.html.html#install-prometheus","title":"Install Prometheus\ud83d\udd17","text":"<p>In order to install Prometheus, apply the following command on your cluster:</p> <pre><code>kubectl apply --kustomize github.com/kubernetes/ingress-nginx/deploy/prometheus/\n</code></pre> <p>Note that this is a Prometheus installation customized for NGINX Ingress; it installs into the ingress-nginx namespace by default, so there is no need to provide a namespace flag or create one.</p>"},{"location":"kubernetes/HTTP-Request-based-Autoscaling-on-K8S-using-Prometheus-and-Keda-on-3Engines-Cloud.html.html#install-keda","title":"Install Keda\ud83d\udd17","text":"<p>With the steps below, create a separate namespace for Keda artifacts, download the repo and install the Keda-Core chart:</p> <pre><code>kubectl create namespace keda\n\nhelm repo add kedacore https://kedacore.github.io/charts\nhelm repo update\n\nhelm install keda kedacore/keda --version 2.3.0 --namespace keda\n</code></pre>"},{"location":"kubernetes/HTTP-Request-based-Autoscaling-on-K8S-using-Prometheus-and-Keda-on-3Engines-Cloud.html.html#deploy-a-sample-app","title":"Deploy a sample app\ud83d\udd17","text":"<p>With the above steps completed, we can deploy a simple application. It will be an NGINX web server, serving a simple \u201cWelcome to nginx!\u201d page. Note that we create a deployment and then expose this deployment as a service of type ClusterIP. 
Create a file app-deployment.yaml in your favorite editor:</p> <p>app-deployment.yaml</p> <pre><code>apiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx\nspec:\n selector:\n matchLabels:\n app: nginx\n replicas: 1\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx\nspec:\n selector:\n app: nginx\n type: ClusterIP\n ports:\n - protocol: TCP\n port: 80\n targetPort: 80\n</code></pre> <p>Then apply with the below command:</p> <pre><code>kubectl apply -f app-deployment.yaml -n ingress-nginx\n</code></pre> <p>We are deploying this application into the ingress-nginx namespace, where the ingress installation and Prometheus are also hosted. For production scenarios, you might want better isolation of application vs. infrastructure; this is, however, beyond the scope of this article.</p>"},{"location":"kubernetes/HTTP-Request-based-Autoscaling-on-K8S-using-Prometheus-and-Keda-on-3Engines-Cloud.html.html#deploy-our-app-ingress","title":"Deploy our app ingress\ud83d\udd17","text":"<p>Our application is already running and exposed in our cluster, but we also want to expose it publicly. For this purpose we will use NGINX ingress, which will also act as a proxy to register the request metrics. 
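</p> <p>Before exposing it, you can optionally sanity-check the service from inside the cluster with a throwaway curl pod (a quick sketch; the pod name curl-test is arbitrary, and the request should return the \u201cWelcome to nginx!\u201d page):</p> <pre><code>kubectl run curl-test --rm -it --restart=Never \\\n--image=curlimages/curl -n ingress-nginx \\\n-- curl -s http://nginx\n</code></pre> <p>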
Create a file app-ingress.yaml with the following contents:</p> <p>app-ingress.yaml</p> <pre><code>apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: app-ingress\n annotations:\n nginx.ingress.kubernetes.io/rewrite-target: /\nspec:\n ingressClassName: nginx\n rules:\n - host: \"64.225.135.67.nip.io\"\n http:\n paths:\n - backend:\n service:\n name: nginx\n port:\n number: 80\n path: /app\n pathType: Prefix\n</code></pre> <p>Then apply with:</p> <pre><code>kubectl apply -f app-ingress.yaml -n ingress-nginx\n</code></pre> <p>After a while, you can get a public IP address where the app is available:</p> <pre><code>$ kubectl get ingress -n ingress-nginx\nNAME CLASS HOSTS ADDRESS PORTS AGE\napp-ingress nginx 64.225.135.67.nip.io 64.225.135.67 80 18h\n</code></pre> <p>After typing the host name with the /app suffix into the browser (replace the floating IP with your own), we can see the app exposed. We are using the nip.io service, which works as a DNS resolver, so there is no need to set up DNS records for the purpose of the demo.</p>"},{"location":"kubernetes/HTTP-Request-based-Autoscaling-on-K8S-using-Prometheus-and-Keda-on-3Engines-Cloud.html.html#access-prometheus-dashboard","title":"Access Prometheus dashboard\ud83d\udd17","text":"<p>To access the Prometheus dashboard, we can port-forward the running prometheus-server to our localhost. This could be useful for troubleshooting. 
We have the prometheus-server running as a NodePort service, which can be verified as below:</p> <pre><code>$ kubectl get services -n ingress-nginx\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\ningress-nginx-controller LoadBalancer 10.254.3.172 64.225.135.67 80:30881/TCP,443:30942/TCP 26h\ningress-nginx-controller-admission ClusterIP 10.254.51.201 &lt;none&gt; 443/TCP 26h\ningress-nginx-controller-metrics ClusterIP 10.254.15.196 &lt;none&gt; 10254/TCP 26h\nnginx ClusterIP 10.254.160.207 &lt;none&gt; 80/TCP 25h\nprometheus-server NodePort 10.254.24.85 &lt;none&gt; 9090:32051/TCP 26h\n</code></pre> <p>We port-forward it to localhost with the following command:</p> <pre><code>kubectl port-forward deployment/prometheus-server 9090:9090 -n ingress-nginx\n</code></pre> <p>Then enter localhost:9090 in your browser and you will see the Prometheus dashboard. In this view we are able to see various metrics exposed by nginx-ingress. This can be verified by typing \u201cnginx-ingress\u201d into the search bar; various related metrics will start to show up.</p> <p></p>"},{"location":"kubernetes/HTTP-Request-based-Autoscaling-on-K8S-using-Prometheus-and-Keda-on-3Engines-Cloud.html.html#deploy-keda-scaledobject","title":"Deploy KEDA ScaledObject\ud83d\udd17","text":"<p>Keda ScaledObject is a custom resource which enables scaling our application based on custom metrics. In the YAML manifest we define what will be scaled (the nginx deployment), what the conditions for scaling are, and the definition and configuration of the trigger, in this case Prometheus. 
Prepare a file scaled-object.yaml with the following contents:</p> <p>scaled-object.yaml</p> <pre><code>apiVersion: keda.sh/v1alpha1\nkind: ScaledObject\nmetadata:\n name: prometheus-scaledobject\n namespace: ingress-nginx\n labels:\n deploymentName: nginx\nspec:\n scaleTargetRef:\n kind: Deployment\n name: nginx # name of the deployment, must be in the same namespace as ScaledObject\n minReplicaCount: 1\n pollingInterval: 15\n triggers:\n - type: prometheus\n metadata:\n serverAddress: http://prometheus-server.ingress-nginx.svc.cluster.local:9090\n metricName: nginx_ingress_controller_requests\n threshold: '100'\n query: sum(rate(nginx_ingress_controller_requests[1m]))\n</code></pre> <p>For a detailed definition of ScaledObject, refer to the Keda documentation. In this example we are leaving out a lot of default settings, the most notable of which is cooldownPeriod. Since it is not explicitly assigned a value here, its default of 300 seconds is in effect; see below for how to change that value to something else.</p> <p>We are using the nginx_ingress_controller_requests metric for scaling. This metric will only populate in the Prometheus dashboard once requests start hitting our app service. We set the threshold to 100 and the rate window to 1 minute, so if there are more than 100 requests per pod in a minute, a scale-up is triggered. Apply the manifest with:</p> <pre><code>kubectl apply -f scaled-object.yaml -n ingress-nginx\n</code></pre>"},{"location":"kubernetes/HTTP-Request-based-Autoscaling-on-K8S-using-Prometheus-and-Keda-on-3Engines-Cloud.html.html#test-with-locust","title":"Test with Locust\ud83d\udd17","text":"<p>We can now test whether the scaling works as expected. We will use Locust for this, which is a load testing tool. 
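Before generating any load, it helps to sanity-check what the ScaledObject above will ask for. This is a minimal sketch of the scale-up arithmetic, assuming the usual behaviour of the KEDA Prometheus trigger (roughly ceil(metric / threshold) pods, never below minReplicaCount); the request rates are illustrative:

```shell
# Illustrative sketch: how many pods the ScaledObject above would request
# for a given total request rate, with threshold=100 per pod and
# minReplicaCount=1. Uses integer ceiling division in plain shell.
threshold=100
for rate in 40 350 1600; do
  pods=$(( (rate + threshold - 1) / threshold ))   # ceiling division
  [ "$pods" -lt 1 ] && pods=1                      # floor at minReplicaCount
  echo "rate=${rate}/min -> ${pods} pod(s)"
done
# rate=40/min  -> 1 pod(s)
# rate=350/min -> 4 pod(s)
# rate=1600/min -> 16 pod(s)
```

The exact replica count also depends on HPA smoothing and polling intervals, so treat these numbers as the steady-state target rather than an instant guarantee.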
To quickly deploy Locust as a LoadBalancer service type, enter the following commands:</p> <pre><code>kubectl create deployment locust --image paultur/locustproject:latest\nkubectl expose deployment locust --type LoadBalancer --port 80 --target-port 8089\n</code></pre> <p>After a couple of minutes the LoadBalancer is created and Locust is exposed:</p> <pre><code>$ kubectl get services\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.254.0.1 &lt;none&gt; 443/TCP 28h\nlocust LoadBalancer 10.254.88.89 64.225.132.243 80:31287/TCP 4m19s\n</code></pre> <p>Open the Locust UI in the browser using the EXTERNAL-IP. It can be either 64.225.132.243 or 64.225.132.243.nip.io; one of these values is sure to work. Then hit \u201cStart Swarming\u201d to initiate mock requests on our app\u2019s public endpoint:</p> <p></p> <p>With the default settings and even a single user, Locust will start swarming hundreds of requests immediately. Tuning Locust is not in the scope of this article, but we can quickly see the effect. 
The additional pod replicas are generated:</p> <pre><code>$ kubectl get pods -n ingress-nginx\nNAME READY STATUS RESTARTS AGE\ningress-nginx-controller-557bf68967-h9zf5 1/1 Running 0 27h\nnginx-85b98978db-2kjx6 1/1 Running 0 30s\nnginx-85b98978db-2kxzz 1/1 Running 0 61s\nnginx-85b98978db-2t42c 1/1 Running 0 31s\nnginx-85b98978db-2xdzw 0/1 ContainerCreating 0 16s\nnginx-85b98978db-2zdjm 1/1 Running 0 30s\nnginx-85b98978db-4btfm 1/1 Running 0 30s\nnginx-85b98978db-4mmlz 0/1 ContainerCreating 0 16s\nnginx-85b98978db-4n5bk 1/1 Running 0 46s\nnginx-85b98978db-525mq 1/1 Running 0 30s\nnginx-85b98978db-5czdf 1/1 Running 0 46s\nnginx-85b98978db-5kkgq 0/1 ContainerCreating 0 16s\nnginx-85b98978db-5rt54 1/1 Running 0 30s\nnginx-85b98978db-5wmdk 1/1 Running 0 46s\nnginx-85b98978db-6tc6p 1/1 Running 0 77s\nnginx-85b98978db-6zcdw 1/1 Running 0 61s\n...\n</code></pre>"},{"location":"kubernetes/HTTP-Request-based-Autoscaling-on-K8S-using-Prometheus-and-Keda-on-3Engines-Cloud.html.html#cooling-down","title":"Cooling down\ud83d\udd17","text":"<p>After hitting \u201cStop\u201d in Locust, the pods will scale down to one replica, in line with the value of the cooldownPeriod parameter, which is defined in the Keda ScaledObject. Its default value is 300 seconds. 
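If you prefer to pin the value in the manifest rather than rely on the default, it can be set explicitly in the ScaledObject spec. A sketch of the relevant fragment; 60 seconds is an illustrative value, and note that current KEDA releases spell the field cooldownPeriod:

```yaml
# Fragment of scaled-object.yaml -- add under spec: to shorten the wait
# before scale-down from the default 300 s (60 s here is illustrative).
spec:
  cooldownPeriod: 60
```

Re-apply the manifest after the change for it to take effect.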
If you want to change it, use the command</p> <pre><code>kubectl edit scaledobject prometheus-scaledobject -n ingress-nginx\n</code></pre>"},{"location":"kubernetes/How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-3Engines-Cloud-OpenStack-Magnum.html.html","title":"How To Access Kubernetes Cluster Post Deployment Using Kubectl On 3Engines Cloud OpenStack Magnum\ud83d\udd17","text":"<p>In this tutorial, you start with a freshly installed Kubernetes cluster on a 3Engines OpenStack server and connect the main Kubernetes tool, kubectl, to the cloud.</p>"},{"location":"kubernetes/How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-3Engines-Cloud-OpenStack-Magnum.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>How to connect kubectl to the OpenStack Magnum server</li> <li>How to access clusters with kubectl</li> </ul>"},{"location":"kubernetes/How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-3Engines-Cloud-OpenStack-Magnum.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Hosting</p> <p>You need a 3Engines Cloud hosting account with Horizon interface https://horizon.3Engines.com.</p> <p>No. 2 Installation of kubectl</p> <p>Standard types of kubectl installation are described on the Install Tools page of the official Kubernetes site.</p> <p>No. 
3 A cluster already installed on the Magnum site</p> <p>You may already have a cluster installed if you have followed one of these articles:</p> <ul> <li>With Horizon interface: How to Create a Kubernetes Cluster Using 3Engines Cloud OpenStack Magnum.</li> <li>With command line interface: How To Use Command Line Interface for Kubernetes Clusters On 3Engines Cloud OpenStack Magnum.</li> </ul> <ul> <li>Or, you may want to create a new cluster called k8s-cluster, just for this occasion \u2013 by using the following CLI command:</li> </ul> <pre><code>openstack coe cluster create \\\n--cluster-template k8s-stable-1.23.5 \\\n--labels eodata_access_enabled=false,floating-ip-enabled=true,master-lb-enabled=true \\\n--merge-labels \\\n--keypair sshkey \\\n--master-count 3 \\\n--node-count 2 \\\n--master-flavor eo1.large \\\n--flavor eo1.large \\\nk8s-cluster\n</code></pre> <p>Warning</p> <p>It takes some 10-20 minutes for the new cluster to form.</p> <p>In the rest of this text we shall use the cluster name k8s-cluster \u2013 be sure to use the name of your existing cluster instead.</p> <p>No. 4 Connect openstack client to the cloud</p> <p>Prepare the openstack and magnum clients by executing Step 2 Connect OpenStack and Magnum Clients to Horizon Cloud from the article How To Install OpenStack and Magnum Clients for Command Line Interface to 3Engines Cloud Horizon.</p>"},{"location":"kubernetes/How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-3Engines-Cloud-OpenStack-Magnum.html.html#the-plan","title":"The Plan\ud83d\udd17","text":"<ul> <li>Follow the steps listed in Prerequisite No. 2 and install kubectl on the platform of your choice.</li> <li>Use the existing Kubernetes cluster on 3Engines or install a new one using the methods outlined in Prerequisite No. 3.</li> <li>Use Step 2 in Prerequisite No. 
4 to enable connection of the openstack and magnum clients to the cloud.</li> </ul> <p>You are then going to connect kubectl to the Cloud.</p>"},{"location":"kubernetes/How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-3Engines-Cloud-OpenStack-Magnum.html.html#step-1-create-directory-to-download-the-certificates","title":"Step 1 Create directory to download the certificates\ud83d\udd17","text":"<p>Create a new directory called k8sdir into which the certificates will be downloaded:</p> <pre><code>mkdir k8sdir\n</code></pre> <p>Once the certificate file is downloaded, you will execute a command similar to this:</p> <pre><code>export KUBECONFIG=/home/dusko/k8sdir/config\n</code></pre> <p>This assumes</p> <ul> <li>that you are using an Ubuntu environment (/home),</li> <li>that the user is dusko,</li> <li>that the directory you just created is k8sdir and, finally,</li> <li>that config is the file which contains data for authorizing to the Kubernetes cluster.</li> </ul> <p>Note</p> <p>In Linux, a file may or may not have an extension, while on Windows it typically has one.</p>"},{"location":"kubernetes/How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-3Engines-Cloud-OpenStack-Magnum.html.html#step-2a-download-certificates-from-the-server-using-the-cli-commands","title":"Step 2A Download Certificates From the Server using the CLI commands\ud83d\udd17","text":"<p>You will use the command</p> <pre><code>openstack coe cluster config\n</code></pre> <p>to download the files that kubectl needs for authentication with the server. 
See its input parameters using the --help parameter:</p> <pre><code>openstack coe cluster config --help\nusage: openstack coe cluster config [-h]\n [--dir &lt;dir&gt;] [--force] [--output-certs]\n [--use-certificate] [--use-keystone]\n &lt;cluster&gt;\n\nGet Configuration for a Cluster\n\npositional arguments:\n &lt;cluster&gt; The name or UUID of cluster to update\n\noptional arguments:\n -h, --help show this help message and exit\n --dir &lt;dir&gt; Directory to save the certificate and config files.\n --force Overwrite files if existing.\n --output-certs Output certificates in separate files.\n --use-certificate Use certificate in config files.\n --use-keystone Use Keystone token in config files.\n</code></pre> <p>Download the certificates into the k8sdir folder:</p> <pre><code>openstack coe cluster config \\\n--dir k8sdir \\\n--force \\\n--output-certs \\\nk8s-cluster\n</code></pre> <p>Four files will be downloaded into the folder:</p> <pre><code>ls k8sdir\nca.pem cert.pem config key.pem\n</code></pre> <p>Parameter --output-certs produces .pem files, which are X.509 certificates in the PEM format, originally created so that they can be sent via email. The file config combines the .pem files and contains all the information needed for kubectl to access the cloud. Using --force overwrites the existing files (if any), so you are guaranteed to work with only the latest versions of the files from the server.</p> <p>The result of this command is shown in the row below:</p> <pre><code>export KUBECONFIG=/home/dusko/k8sdir/config\n</code></pre> <p>Copy this command and paste it into the terminal command line, then press the Enter key to execute it. 
The KUBECONFIG environment variable will thus be initialized and the kubectl command will have access to the config file at all times.</p> <p>This is the entire procedure in the terminal window:</p> <p></p>"},{"location":"kubernetes/How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-3Engines-Cloud-OpenStack-Magnum.html.html#step-2b-download-certificates-from-the-server-using-horizon-commands","title":"Step 2B Download Certificates From the Server using Horizon commands\ud83d\udd17","text":"<p>You can download the config file from Horizon directly to your computer. First list the clusters with command Container Infra -&gt; Clusters, find the cluster and click on the rightmost drop-down menu in its column:</p> <p></p> <p>Click on option Show Cluster Config and the config file will be opened in the editor:</p> <p></p> <p>From the editor, save it on disk. The file name will combine the name of the cluster with the word config and, if you have downloaded the same file several times, there may be a dash followed by a number, like this:</p> <pre><code>k8s-cluster-config-1.yaml\n</code></pre> <p>For uniformity, save it to the same folder k8sdir as the config file and point the KUBECONFIG variable to that path:</p> <pre><code>export KUBECONFIG=/home/dusko/k8sdir/k8s-cluster-config-1.yaml\n</code></pre> <p>Depending on your environment, you may need to open a new terminal window to make the above command work.</p>"},{"location":"kubernetes/How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-3Engines-Cloud-OpenStack-Magnum.html.html#step-3-verify-that-kubectl-has-access-to-the-cloud","title":"Step 3 Verify That kubectl Has Access to the Cloud\ud83d\udd17","text":"<p>See basic data about the cluster with the following command:</p> <pre><code>kubectl get nodes -o wide\n</code></pre> <p>The result is:</p> <p></p> <p>That verifies that kubectl has proper access to the cloud.</p> <p>To see available commands kubectl has, use:</p> <pre><code>kubectl 
--help\n</code></pre> <p>The listing is too long to reproduce here, but here is how it starts:</p> <p></p> <p>kubectl also has a long list of options, which are parameters that can be applied to any command. See them with</p> <pre><code>kubectl options\n</code></pre>"},{"location":"kubernetes/How-To-Access-Kubernetes-Cluster-Post-Deployment-Using-Kubectl-On-3Engines-Cloud-OpenStack-Magnum.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>With kubectl operational, you can</p> <ul> <li>deploy apps on the cluster,</li> <li>access multiple clusters,</li> <li>create load balancers,</li> <li>access applications in the cluster using port forwarding,</li> <li>use a Service to access an application in a cluster,</li> <li>list container images in the cluster,</li> <li>use Services, Deployments and all other resources in a Kubernetes cluster.</li> </ul> <p>The Kubernetes dashboard is a visual alternative to kubectl. To install it, see Using Dashboard To Access Kubernetes Cluster Post Deployment On 3Engines Cloud OpenStack Magnum.</p>"},{"location":"kubernetes/How-To-Create-API-Server-LoadBalancer-for-Kubernetes-Cluster-On-3Engines-Cloud-OpenStack-Magnum.html.html","title":"How To Create API Server LoadBalancer for Kubernetes Cluster on 3Engines Cloud OpenStack Magnum\ud83d\udd17","text":"<p>A load balancer can be understood both as</p> <ul> <li>an external IP address through which the network / Internet traffic comes into the Kubernetes cluster, as well as</li> <li>the piece of software that decides to which of the master nodes to send the incoming traffic.</li> </ul> <p>There is an option to create a load balancer while creating the Kubernetes cluster, but you can also create the cluster without one. 
This article will show you how to access the cluster even if you did not specify a load balancer at creation time.</p>"},{"location":"kubernetes/How-To-Create-API-Server-LoadBalancer-for-Kubernetes-Cluster-On-3Engines-Cloud-OpenStack-Magnum.html.html#what-we-are-going-to-do","title":"What We Are Going To Do\ud83d\udd17","text":"<ul> <li>Create a cluster called NoLoadBalancer with one master node and no load balancer</li> <li>Assign a floating IP address to its master node</li> <li>Create a config file to access the cluster</li> <li>In that config file, swap the local server address with the actual floating IP of the master node</li> <li>Use parameter --insecure-skip-tls-verify=true to override server security</li> <li>Verify that kubectl is working normally, which means that you have full access to the Kubernetes cluster</li> </ul>"},{"location":"kubernetes/How-To-Create-API-Server-LoadBalancer-for-Kubernetes-Cluster-On-3Engines-Cloud-OpenStack-Magnum.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Hosting</p> <p>You need a 3Engines Cloud hosting account with Horizon interface https://horizon.3Engines.com.</p> <p>No. 2 Installation of the openstack command</p> <p>To activate the kubectl command, the openstack command from the CLI OpenStack Interface must be operational. The first part of the article How To Use Command Line Interface for Kubernetes Clusters On 3Engines Cloud OpenStack Magnum shows how to install it.</p> <p>No. 3 How to create Kubernetes cluster using Horizon commands</p> <p>The article How to Create a Kubernetes Cluster Using 3Engines Cloud OpenStack Magnum shows creation of clusters with the Horizon visual interface. (In this article, you shall use it to create an example cluster called NoLoadBalancer.)</p> <p>No. 
4 Connect to the Kubernetes Cluster in Order to Use kubectl</p> <p>The article How To Access Kubernetes Cluster Post Deployment Using Kubectl On 3Engines Cloud OpenStack Magnum will show you how to connect your local machine to the existing Kubernetes cluster.</p>"},{"location":"kubernetes/How-To-Create-API-Server-LoadBalancer-for-Kubernetes-Cluster-On-3Engines-Cloud-OpenStack-Magnum.html.html#how-to-enable-or-disable-load-balancer-for-master-nodes","title":"How To Enable or Disable Load Balancer for Master Nodes\ud83d\udd17","text":"<p>The default state for a Kubernetes cluster in 3Engines Cloud OpenStack Magnum hosting is to have no load balancer set up in advance. You can decide to have a load balancer created together with the basic Kubernetes cluster by checking the option Enable Load Balancer for Master Nodes in window Network when creating a cluster through the Horizon interface. (See Prerequisite No. 3 for the complete procedure.)</p> <p>The check box to enable load balancer for master nodes has two completely different meanings when checked and not checked.</p> <p>Checked state</p> <p></p> <p>If checked, the load balancer for master nodes will be created. If you specified two or more master nodes in previous screens, then this field must be checked.</p> <p>Regardless of the number of master nodes you have specified, checking this field yields higher chances of successfully creating the Kubernetes cluster.</p> <p>Non-checked state</p> <p></p> <p>If you accept the default state of unchecked, no load balancer will be created. However, without any load balancer \u201cin front\u201d of the cluster, the cluster API is exposed only within the Kubernetes network. 
You save the cost of the load balancer, but the direct connection from the local machine to the cluster is lost.</p>"},{"location":"kubernetes/How-To-Create-API-Server-LoadBalancer-for-Kubernetes-Cluster-On-3Engines-Cloud-OpenStack-Magnum.html.html#one-master-node-no-load-balancer-and-the-problem-it-all-creates","title":"One Master Node, No Load Balancer and the Problem It All Creates\ud83d\udd17","text":"<p>To show exactly what the problem is, use</p> <ul> <li>Prerequisite No. 2 to install the openstack client on the local machine, so that you can use the openstack command.</li> <li>Then use Prerequisite No. 4 to connect to the OpenStack cloud and start using the openstack command from the local terminal.</li> </ul> <p>Then you can try a very common command such as</p> <pre><code>kubectl get nodes\n</code></pre> <p>but it will not work. If there were a load balancer \u201cin front of the cluster\u201d, it would work, but here there is none, so it will not. The rest of this article will show you how to still make it work, using the fact that the master node of the cluster has its own load balancer for kube-api.</p>"},{"location":"kubernetes/How-To-Create-API-Server-LoadBalancer-for-Kubernetes-Cluster-On-3Engines-Cloud-OpenStack-Magnum.html.html#step-1-create-a-cluster-with-one-master-node-and-no-load-balancer","title":"Step 1 Create a Cluster With One Master Node and No Load Balancer\ud83d\udd17","text":"<p>Create cluster NoLoadBalancer as explained in Prerequisite No. 3. 
Let there be</p> <ul> <li>one master node and</li> <li>no load balancers (do not check field Enable Load Balancer for Master Nodes in subwindow Network).</li> <li>Use any key pair that you might have \u2013 it is of no concern for this article.</li> <li>Activate NGINX as Ingress controller</li> </ul> <p>The result will be the creation of cluster NoLoadBalancer as seen in this image:</p> <p></p> <p>To illustrate the problem, a very basic command such as</p> <pre><code>kubectl get pods NoLoadBalancer -o yaml\n</code></pre> <p>to list the pods in cluster NoLoadBalancer, will show an error message like this one:</p> <pre><code>Unable to connect to the server: dial tcp 10.0.0.54:6443: i/o timeout\n</code></pre> <p>Addresses starting with 10.0\u2026 are usually reserved for local networks, meaning that no access from the Internet is enabled at this time.</p> <p></p>"},{"location":"kubernetes/How-To-Create-API-Server-LoadBalancer-for-Kubernetes-Cluster-On-3Engines-Cloud-OpenStack-Magnum.html.html#step-2-create-floating-ip-for-master-node","title":"Step 2 Create Floating IP for Master Node\ud83d\udd17","text":"<p>Here are the instances that serve as nodes for that cluster:</p> <p></p> <p>The master node is called noloadbalancer-3h2i5x5iz2u6-master-0. Click on the drop-down menu on the right side of its row and choose option Associate Floating IP.</p> <p>To add the IP, click on a selection of available addresses (there may be only one but in certain cases, there can be several to choose from):</p> <p></p> <p>This is the result:</p> <p></p> <p>The IP number is 64.225.135.112 \u2013 you are going to use it later on, to change the config file for access to the Kubernetes cluster.</p>"},{"location":"kubernetes/How-To-Create-API-Server-LoadBalancer-for-Kubernetes-Cluster-On-3Engines-Cloud-OpenStack-Magnum.html.html#step-3-create-config-file-for-kubernetes-cluster","title":"Step 3 Create config File for Kubernetes Cluster\ud83d\udd17","text":"<p>You are now going to connect to the NoLoadBalancer cluster in spite of 
it not having a load balancer from the very start. To that end, create a config file to connect to the cluster, with the following command:</p> <pre><code>openstack coe cluster config NoLoadBalancer --force\n</code></pre> <p>It will return a row such as this:</p> <pre><code>export KUBECONFIG=/Users/&lt;YOUR PATH TO CONFIG FILE&gt;/config\n</code></pre> <p>Execute this command from the terminal command line. A config file has also been created at that address. To show its contents, execute command</p> <pre><code>cat config\n</code></pre> <p>assuming you already are in the required folder.</p> <p>The config file will look a lot like gibberish because it contains certificates, tokens and other rows with random content, some of them hundreds of characters long. Here is one part of it:</p> <p></p> <p>The important row here is this network address:</p> <pre><code>server: https://10.0.0.54:6443\n</code></pre>"},{"location":"kubernetes/How-To-Create-API-Server-LoadBalancer-for-Kubernetes-Cluster-On-3Engines-Cloud-OpenStack-Magnum.html.html#step-4-swap-existing-floating-ip-address-for-the-network-address","title":"Step 4 Swap Existing Floating IP Address for the Network Address\ud83d\udd17","text":"<p>Now go back to the Horizon interface and execute commands Compute -&gt; Instances to see the addresses for the master node of the NoLoadBalancer cluster:</p> <p></p> <p>There are two addresses:</p> <pre><code>10.0.0.54, 64.225.135.112\n</code></pre> <p>Incidentally, the same 10.0.0.54 address is also present in the config file, ending with port address :6443.</p> <p>Now try to execute a kubectl command in the terminal and see the result, perhaps like this one:</p> <p></p> <p>The access is there but the nodes and pods are still out of reach. That is because address 10.0.0.54 is an internal network address for the cluster and was never supposed to work as an Internet address.</p> <p>So, open the config file using nano (or another text editor of your choice). 
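If you would rather script the address swap than edit by hand, sed can do it. A sketch, demonstrated on a throwaway file so the commands run anywhere; point sed at your real config (for example the path printed by openstack coe cluster config) instead:

```shell
# Sketch: swap the internal server address for the master node's floating
# IP with sed instead of a manual edit. The demo file stands in for your
# real kubeconfig; both addresses are the example values from this article.
printf 'server: https://10.0.0.54:6443\n' > /tmp/config-demo
sed -i 's|https://10.0.0.54:6443|https://64.225.135.112:6443|' /tmp/config-demo
cat /tmp/config-demo
# server: https://64.225.135.112:6443
```

On macOS, sed -i requires a backup-suffix argument (for example sed -i '').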
Swap 10.0.0.54 for 64.225.135.112 in the server line. The address 64.225.135.112 is the address of the floating IP for the master node and will fit in perfectly.</p> <p>The line should look like this:</p> <p></p> <p>Save the edited file. In the case of nano, those will be commands <code>Control-x</code>, <code>Y</code> and pressing <code>Enter</code> on the keyboard.</p>"},{"location":"kubernetes/How-To-Create-API-Server-LoadBalancer-for-Kubernetes-Cluster-On-3Engines-Cloud-OpenStack-Magnum.html.html#step-4-add-parameter-insecure-skip-tls-verifytrue-to-make-kubectl-work","title":"Step 4 Add Parameter --insecure-skip-tls-verify=true to Make kubectl Work\ud83d\udd17","text":"<p>Try again to activate kubectl and again it will fail. To make it work, add parameter --insecure-skip-tls-verify=true:</p> <pre><code>kubectl get pods --insecure-skip-tls-verify=true\n</code></pre> <p>Or, try out a more meaningful command</p> <pre><code>kubectl get nodes --insecure-skip-tls-verify=true\n</code></pre> <p>This is the result of all these commands, in the terminal window:</p> <p></p> <p>To continue working successfully, use normal kubectl commands and always add --insecure-skip-tls-verify=true at the end.</p> <p>Attention</p> <p>With parameter --insecure-skip-tls-verify, cluster certificates will not be checked for validity. That will make your HTTPS connections insecure. Not recommended for a production environment. 
Use at your own risk, maybe for some local testing or when you are just learning about Kubernetes and clusters.</p> <p>For production, it is strongly recommended to check the field Enable Load Balancer for Master Nodes when creating a new cluster, regardless of the number of master nodes you have specified.</p>"},{"location":"kubernetes/How-To-Install-OpenStack-and-Magnum-Clients-for-Command-Line-Interface-to-3Engines-Cloud-Horizon.html.html","title":"How To Install OpenStack and Magnum Clients for Command Line Interface to 3Engines Cloud Horizon\ud83d\udd17","text":""},{"location":"kubernetes/How-To-Install-OpenStack-and-Magnum-Clients-for-Command-Line-Interface-to-3Engines-Cloud-Horizon.html.html#how-to-issue-commands-to-the-openstack-and-magnum-servers","title":"How To Issue Commands to the OpenStack and Magnum Servers\ud83d\udd17","text":"<p>There are three ways of working with Kubernetes clusters within the OpenStack Magnum and Horizon modules:</p> <p>Horizon Commands</p> <p>You issue Horizon commands using the mouse and keyboard, through predefined screen wizards. It is the easiest way to start but not the most productive in the long run.</p> <p>Command Line Interface (CLI)</p> <p>CLI commands are issued from a desktop computer or a server in the cloud. This approach allows you to save commands as text and repeat them afterwards. This is the preferred way for professionals.</p> <p>HTTPS Requests to the Magnum Server</p> <p>Both Horizon and the CLI use HTTPS requests internally and in an interactive manner. 
You can, however, write your own software to automate and/or change the state of the server, in real time.</p>"},{"location":"kubernetes/How-To-Install-OpenStack-and-Magnum-Clients-for-Command-Line-Interface-to-3Engines-Cloud-Horizon.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>How to install the CLI \u2013 OpenStack and Magnum clients</li> <li>How to connect the CLI to the Horizon server</li> <li>Basic examples of using OpenStack and Magnum clients</li> </ul>"},{"location":"kubernetes/How-To-Install-OpenStack-and-Magnum-Clients-for-Command-Line-Interface-to-3Engines-Cloud-Horizon.html.html#notes-on-python-versions-and-environments-for-installation","title":"Notes On Python Versions and Environments for Installation\ud83d\udd17","text":"<p>OpenStack is written in Python, so you need to first install a Python working environment and then install the OpenStack clients. Older OpenStack documentation mentions Python 2.7, but you will most likely only be able to install a 3.x version of Python. During the installation, adjust the Python version numbers mentioned in the documentation accordingly.</p> <p>You will be able to install Python on any of the popular platforms, such as Windows, macOS or Linux on a desktop computer. Or, supposing you are logged into the Horizon interface, you can use commands Compute =&gt; Instances to create an instance of a virtual machine. Then install Python there. Ubuntu 18.04 or 20.04 would serve best in this regard.</p> <p>Warning</p> <p>Once you install a Kubernetes cluster, you will also have installed instances with Fedora 33 or 35, say, for the master node of the control plane. You can install Python and the OpenStack clients there as well, but Ubuntu is much easier to use and is the preferred solution in this case.</p> <p>You can install Python and the clients on several environments at once, say, on a desktop computer and on a virtual machine on the server, at the same time. 
Following the instructions in this tutorial, they will all be connected to one and the same Kubernetes cluster anyway.</p> <p>Note</p> <p>If you decide to install Python and the OpenStack clients on a virtual machine, you will need SSH keys in order to be able to enter the working environment. See How to create key pair in OpenStack Dashboard on 3Engines Cloud.</p>"},{"location":"kubernetes/How-To-Install-OpenStack-and-Magnum-Clients-for-Command-Line-Interface-to-3Engines-Cloud-Horizon.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Hosting</p> <p>You need a 3Engines Cloud hosting account with Horizon interface https://horizon.3Engines.com.</p> <p>No. 2 Installation of OpenStack CLI on Ubuntu 20.04 Server</p> <p>The article How to install OpenStackClient for Linux on 3Engines Cloud shows how to install the OpenStack client on an Ubuntu server. That Ubuntu may be the desktop operating system, a virtual machine on some other operating system, or an Ubuntu server in the cloud.</p> <p>Installation on macOS will be similar to the installation on Ubuntu.</p> <p>No. 3 Installation of OpenStack CLI on Windows</p> <p>The article How to install OpenStackClient GitBash for Windows on 3Engines Cloud shows installation on Windows.</p> <p>No. 4 General Instructions for Installation of OpenStack Clients</p> <p>There are various ways of installing Python and the required clients. For instance, on macOS, you can install the clients using Python pip or install them natively, using Homebrew.</p> <p>The article Install the OpenStack command-line clients will give a systematic introduction to installation of the OpenStack family of clients on various operating systems.</p> <p>Once installed, the CLI commands will be identical across various platforms and operating systems.</p> <p>No. 5 Connect openstack command to the cloud</p> <p>After the successful installation of the openstack command, it should be connected to the cloud. 
Follow this article for technical details: How to activate OpenStack CLI access to the 3Engines Cloud using one- or two-factor authentication.</p>"},{"location":"kubernetes/How-To-Install-OpenStack-and-Magnum-Clients-for-Command-Line-Interface-to-3Engines-Cloud-Horizon.html.html#step-1-install-the-cli-for-kubernetes-on-openstack-magnum","title":"Step 1 Install the CLI for Kubernetes on OpenStack Magnum\ud83d\udd17","text":"<p>In this step, you are going to install clients for commands openstack and coe, from modules OpenStack and Magnum, respectively.</p> <p>Follow Prerequisites Nos. 2, 3 or 4 to install the main client for OpenStack. Its name is python-openstackclient and the installation described there will typically contain a command such as</p> <pre><code>pip install python-openstackclient\n</code></pre> <p>If you have installed OpenStackClient using those prerequisite resources, we shall assume that the openstack command is available and connected to the cloud.</p> <p>At the end of the installation from either of the prerequisite articles, install the Magnum client by issuing this command:</p> <pre><code>pip install python-magnumclient\n</code></pre>"},{"location":"kubernetes/How-To-Install-OpenStack-and-Magnum-Clients-for-Command-Line-Interface-to-3Engines-Cloud-Horizon.html.html#step-2-how-to-use-the-openstack-client","title":"Step 2 How to Use the OpenStack Client\ud83d\udd17","text":"<p>In this step, you are going to start using the OpenStack client you have installed and connected to the cloud.</p> <p>There are two ways of using the OpenStackClient. 
If you enter the word openstack at the command prompt of the terminal, you will enter the special command line interface, like this:</p> <p></p> <p>The benefit would be that you do not have to type the openstack keyword for every command.</p> <p>Type quit to leave the openstack internal command line prompt.</p> <p>The preferred way, however, is typing the keyword openstack, followed by parameters, and running it from the terminal command line.</p> <p>OpenStack commands may have dozens of parameters, so it is better to compose the command in an independent text editor and then copy and paste it into the terminal.</p>"},{"location":"kubernetes/How-To-Install-OpenStack-and-Magnum-Clients-for-Command-Line-Interface-to-3Engines-Cloud-Horizon.html.html#the-help-command","title":"The Help Command\ud83d\udd17","text":"<p>To learn about the available commands and their parameters, type --help after the command. If applied to the keyword openstack itself, it will write out a very long list of commands, which may come in useful as an orientation. It may start out like this:</p> <p></p> <p>This is how it ends:</p> <p></p> <p>The colon in the last line means that the output is in the vi (or vim) editor. To leave it, type the letter q and press Enter on the keyboard.</p> <p>Prerequisites No.
3 and 4 lead to the official OpenStack user documentation.</p> <p>Here is what happens when you enter a wrong parameter, say, networks instead of network:</p> <pre><code>openstack networks list\n</code></pre> <p></p> <p>You get a list of commands similar to what you just typed.</p> <p>To list the networks available in the system, use the singular version of the command:</p> <pre><code>openstack network list\n</code></pre> <p></p>"},{"location":"kubernetes/How-To-Install-OpenStack-and-Magnum-Clients-for-Command-Line-Interface-to-3Engines-Cloud-Horizon.html.html#step-4-how-to-use-the-magnum-client","title":"Step 4 How to Use the Magnum Client\ud83d\udd17","text":"<p>The OpenStack command for the server is openstack, but for Magnum the command is not magnum, as one would expect, but coe, for container orchestration engine. Therefore, the commands for clusters will always start with openstack coe.</p> <p>See the cluster commands by entering</p> <pre><code>openstack coe\n</code></pre> <p>into the command line:</p> <p></p> <p>You can see the existing clusters using the following command:</p> <pre><code>openstack coe cluster list\n</code></pre> <p></p> <p>This is more or less the same information that you can get from the Horizon interface:</p> <p></p> <p>after clicking on Container Infra =&gt; Clusters.</p> <p>Prerequisite No.
5 offers more technical info about the Magnum client.</p>"},{"location":"kubernetes/How-To-Install-OpenStack-and-Magnum-Clients-for-Command-Line-Interface-to-3Engines-Cloud-Horizon.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>In this tutorial you have</p> <ul> <li>installed the OpenStack and Magnum clients</li> <li>connected them to the server, then used</li> <li>openstack command to access the server in general and</li> <li>coe to access the clusters in particular.</li> </ul> <p>The article How To Use Command Line Interface for Kubernetes Clusters On 3Engines Cloud OpenStack Magnum explains</p> <ul> <li>the advantages of using the CLI instead of Horizon interface, showing</li> <li>how to create a cluster template as well as</li> <li>how to create a new cluster</li> </ul> <p>all via the CLI.</p>"},{"location":"kubernetes/How-To-Use-Command-Line-Interface-for-Kubernetes-Clusters-On-3Engines-Cloud-OpenStack-Magnum.html.html","title":"How To Use Command Line Interface for Kubernetes Clusters On 3Engines Cloud OpenStack Magnum\ud83d\udd17","text":"<p>In this article you shall use Command Line Interface (CLI) to speed up testing and creation of Kubernetes clusters on OpenStack Magnum servers.</p>"},{"location":"kubernetes/How-To-Use-Command-Line-Interface-for-Kubernetes-Clusters-On-3Engines-Cloud-OpenStack-Magnum.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>The advantages of using CLI over the Horizon graphical interface</li> <li>Debugging OpenStack and Magnum commands</li> <li>How to create a new Kubernetes cluster template using CLI</li> <li>How to create a new Kubernetes cluster using CLI</li> <li>Reasons why the cluster may fail to create</li> <li>CLI commands to delete a cluster</li> 
</ul>"},{"location":"kubernetes/How-To-Use-Command-Line-Interface-for-Kubernetes-Clusters-On-3Engines-Cloud-OpenStack-Magnum.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Hosting</p> <p>You need a 3Engines Cloud hosting account with Horizon interface https://horizon.3Engines.com.</p> <p>No. 2 Private and public keys</p> <p>An SSH key-pair created in OpenStack dashboard. To create it, follow this article How to create key pair in OpenStack Dashboard on 3Engines Cloud. You will have created keypair called sshkey and you will be able to use it for this tutorial as well.</p> <p>No. 3 Command Structure of OpenStack Client Commands</p> <p>Here is the manual for OpenStackClient commands: Command Structure Xena version.</p> <p>No. 4 Command List of OpenStack Client Commands</p> <p>These are all the commands supported by Xena release of OpenStackClient: Xena Command List.</p> <p>No. 5 Documentation for Magnum client</p> <p>These are all the commands supported by Xena release of MagnumClient: Magnum User Guide.</p> <p>No. 6 How to install OpenStack and Magnum Clients</p> <p>The step that directly precedes this article is: How To Install OpenStack and Magnum Clients for Command Line Interface to 3Engines Cloud Horizon.</p> <p>In that guide, you have installed the CLI and in this tutorial, you are going to use it to work with Kubernetes on OpenStack Magnum.</p> <p>No. 
7 Autohealing of Kubernetes Clusters</p> <p>To learn more about autohealing of Kubernetes clusters, follow this official article What is Magnum Autohealer?.</p>"},{"location":"kubernetes/How-To-Use-Command-Line-Interface-for-Kubernetes-Clusters-On-3Engines-Cloud-OpenStack-Magnum.html.html#the-advantages-of-using-the-cli","title":"The Advantages of Using the CLI\ud83d\udd17","text":"<p>You can use the CLI and Horizon interface interchangeably, but there are at least three advantages in using CLI.</p>"},{"location":"kubernetes/How-To-Use-Command-Line-Interface-for-Kubernetes-Clusters-On-3Engines-Cloud-OpenStack-Magnum.html.html#reproduce-commands-through-cut-paste","title":"Reproduce Commands Through Cut &amp; Paste\ud83d\udd17","text":"<p>Here is a command to list flavors in the system</p> <pre><code>openstack flavor list\n</code></pre> <p></p> <p>If you have this line stored in text editor app, you can reproduce it at will. In contrast, to get the list of flavors using Horizon, you would have to click on a series of screen buttons</p> <p>Compute =&gt; Instances =&gt; Launch instance =&gt; Flavor</p> <p>and only then get the list of flavors to choose from:</p> <p></p> <p>A bonus is that keeping commands in a text editor automatically creates documentation for the server and cluster.</p>"},{"location":"kubernetes/How-To-Use-Command-Line-Interface-for-Kubernetes-Clusters-On-3Engines-Cloud-OpenStack-Magnum.html.html#cli-commands-can-be-automated","title":"CLI Commands Can Be Automated\ud83d\udd17","text":"<p>You can use available automation. 
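The pipeline discussed next hinges on extracting a single field from the tabular output of openstack coe cluster show. That extraction step can be tried offline on canned output; the table fragment below is a made-up sample for illustration, not real cluster data:

```shell
# Simulate one row of `openstack coe cluster show` tabular output and
# extract the api_address field with awk, exactly as in the pipeline.
sample_output='| api_address | https://64.225.132.135:6443 |'

KUBERNETES_URL=$(printf '%s\n' "$sample_output" | awk '/ api_address /{print $4}')
echo "$KUBERNETES_URL"   # prints https://64.225.132.135:6443
```

The same awk expression works on the real command output because the table pads every field with spaces, so the URL is always the fourth whitespace-separated field of the matching row.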
The result of the following Ubuntu pipeline is the url for communication from kubectl to the Kubernetes cluster:</p> <p></p> <p>There are two commands pipelined into one:</p> <pre><code>KUBERNETES_URL=$(openstack coe cluster show k8s-cluster\n | awk '/ api_address /{print $4}')\n</code></pre> <p>The result of the first command</p> <pre><code>openstack coe cluster show k8s-cluster\n</code></pre> <p>is a series of lines starting with the name of the parameter and followed by the actual value.</p> <p></p> <p>The second statement, to the right of the pipelining symbol |</p> <pre><code>awk '/ api_address /{print $4}')\n</code></pre> <p>is searching for the line starting with api_address and extracting its value https://64.225.132.135:6443. The final result is exported to the system variable KUBERNETES_URL, thus automatically setting it up for use by Kubernetes cluster command kubectl when accessing the cloud.</p>"},{"location":"kubernetes/How-To-Use-Command-Line-Interface-for-Kubernetes-Clusters-On-3Engines-Cloud-OpenStack-Magnum.html.html#cli-yields-access-to-all-of-the-existing-openstack-and-magnum-parameters","title":"CLI Yields Access to All of the Existing OpenStack and Magnum Parameters\ud83d\udd17","text":"<p>CLI commands offer access to a larger set of parameters than is available through Horizon. 
For instance, in Horizon, the default time allowed for creation of a cluster is 60 minutes, while in the CLI you can set it to a value of your choice.</p>"},{"location":"kubernetes/How-To-Use-Command-Line-Interface-for-Kubernetes-Clusters-On-3Engines-Cloud-OpenStack-Magnum.html.html#debugging-openstack-and-magnum-commands","title":"Debugging OpenStack and Magnum Commands\ud83d\udd17","text":"<p>To see what is actually happening behind the scenes when executing client commands, add the parameter --debug:</p> <pre><code>openstack coe cluster list --debug\n</code></pre> <p>The output will be several screens long, consisting of GET and POST web calls, with dozens of parameters shown on screen. (The output is too voluminous to reproduce here.)</p>"},{"location":"kubernetes/How-To-Use-Command-Line-Interface-for-Kubernetes-Clusters-On-3Engines-Cloud-OpenStack-Magnum.html.html#how-to-enter-openstack-commands","title":"How to Enter OpenStack Commands\ud83d\udd17","text":"<p>Note</p> <p>In the forthcoming example, version fedora-coreos-34.20210904.3.0 of the Fedora images is used. As the system is updated over time, the actual values may differ, for instance, fedora-coreos-35 or fedora-coreos-33.20210426.3.0. Use the Horizon command Compute -&gt; Images to see which images of Fedora are currently available, then edit and replace as needed.</p> <p>There are several ways to write down and enter OpenStack commands into the terminal command line interface.</p> <p>One way is to enter the command openstack and press Enter on the keyboard. You enter the line mode of the openstack command and can enter rows of various openstack parameters line after line. This is strictly for manual data entry and is difficult to automate.</p> <p></p> <p>Type quit and press Enter on the keyboard to leave that mode.</p> <p>The usual way of entering openstack parameters is in one long line. Leave spaces between parameters but enter label values without any spaces in between.
An example may be:</p> <p></p> <p>The line breaks and blanks have to be eradicated manually in this case.</p> <p>A more elegant way is to use the backslash character, \\, at the end of each line. The character after a backslash is escaped, so if you enter the backslash at the very end of a line, the end-of-line character is neutralized and the first and the second line are treated as one continuous line. That is exactly what you want, so here is what an entry could look like with this approach:</p> <pre><code>openstack coe cluster template create kubecluster \\\n--image \"fedora-coreos-34.20210904.3.0\" \\\n--external-network external \\\n--master-flavor eo1.large \\\n--flavor eo1.large \\\n--docker-volume-size 50 \\\n--network-driver calico \\\n--docker-storage-driver overlay2 \\\n--master-lb-enabled \\\n--volume-driver cinder \\\n--labels boot_volume_type=,boot_volume_size=50,kube_tag=v1.18.2,availability_zone=nova \\\n--coe kubernetes -f value -c uuid\n</code></pre> <p>Each line ends with a backslash, so all these lines appear as one (long) line to the terminal command line scanner. However, when copying and pasting this to the terminal line, beware of the following situation:</p> <p></p> <p>If blanks are present at the beginning of each line, that will be a problem. Eliminate them by going into any text editor and then removing them either manually or through the replace function. What you need to have in the text editor is this:</p> <p></p> <p>Now you can copy and paste it into the terminal command line:</p> <p></p> <p>You notice that the line with labels can become long and its right part may not be visible on screen.
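The backslash continuation is plain shell behaviour and can be verified with a minimal self-contained example before composing a long command:

```shell
# A backslash immediately before the end of line escapes the newline,
# so the shell reads these two physical lines as one logical command.
echo one \
two
# prints: one two
```

If anything other than the newline follows the backslash, even a single trailing space, the continuation does not happen, which is exactly why stray blanks around the line ends break pasted multi-line commands.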
Use \\ and a new line to break the long --labels line into several shorter ones:</p> <p></p> <p>Pressing Enter on the keyboard activates this entire command and it is accepted by the system, as you can see in the line below the command.</p> <p>Warning</p> <p>If you are new to Kubernetes, please create clusters directly from the default cluster template at first. Once you get more experience, you can start creating your own cluster templates, and here is how to do it using the CLI.</p>"},{"location":"kubernetes/How-To-Use-Command-Line-Interface-for-Kubernetes-Clusters-On-3Engines-Cloud-OpenStack-Magnum.html.html#openstack-command-for-creation-of-cluster","title":"OpenStack Command for Creation of Cluster\ud83d\udd17","text":"<p>In this step you can create a new cluster using either the default cluster template or any of the templates that you have already created.</p> <p>Enter</p> <pre><code>openstack coe cluster create -h\n</code></pre> <p>to see the parameters. Provide all or almost all of the required parameters.</p> <pre><code>usage: openstack coe cluster create\n[-h]\n--cluster-template &lt;cluster-template&gt;\n[--discovery-url &lt;discovery-url&gt;]\n[--docker-volume-size &lt;docker-volume-size&gt;]\n[--labels &lt;KEY1=VALUE1,KEY2=VALUE2;KEY3=VALUE3...&gt;]\n[--keypair &lt;keypair&gt;]\n[--master-count &lt;master-count&gt;]\n[--node-count &lt;node-count&gt;]\n[--timeout &lt;timeout&gt;]\n[--master-flavor &lt;master-flavor&gt;]\n[--flavor &lt;flavor&gt;]\n&lt;name&gt;\n</code></pre> <p>Here is what one such command might actually look like:</p> <pre><code>openstack coe cluster create\n --cluster-template k8s-stable-1.23.5\n --docker-volume-size 50\n --labels eodata_access_enabled=false,floating-ip-enabled=true,\n --merge-labels\n --keypair sshkey\n --master-count 3\n --node-count 2\n --timeout 190\n --master-flavor eo1.large\n --flavor eo1.large\n newcluster\n</code></pre> <p>Warning</p> <p>When using the exemplar default cluster template,
k8s-stable-1.23.5, there is no need to specify the label master-lb-enabled=true, as the master load balancer will always be created with the default cluster template. The only way not to have a master load balancer created with the default template is to specify the flag --master-lb-disabled. Using master-lb-enabled=false with --merge-labels will not work either, i.e. it will not prevent the master load balancer from being created.</p> <p>Here are some special labels whose functionality is available only through the CLI, not through Horizon.</p> <p>How to properly form a cluster with auto healing turned on</p> <p>Note</p> <p>Prerequisite No. 6 will show you how to enable the command line interface for your cloud server. Prerequisite No. 7 will give you a formal introduction to the notion of Kubernetes autohealing, as implemented in OpenStack Magnum.</p> <p>The only way to have auto healing turned on and guarantee at the same time that the cluster will be formed normally is to set the following label:</p> <pre><code>auto_healing_enabled=True\n</code></pre> <p>Warning</p> <p>Do not include the above label if you want to create a cluster that does not use auto healing.</p> <p>Here is a variation of the CLI command to generate a cluster.
It uses medium flavors instead of large, has only one master and one worker node, has auto healing turned on, etc.</p> <pre><code>openstack coe cluster create \\\n--cluster-template k8s-stable-1.23.5 \\\n--labels floating-ip-enabled=true,master-lb-enabled=true,auto_healing_enabled=true \\\n--merge-labels \\\n--keypair sshkey \\\n--master-count 1 \\\n--node-count 1 \\\n--master-flavor eo1.medium \\\n--flavor eo1.medium \\\nnewcluster\n</code></pre> <p>Execute the command for creation of a cluster</p> <p>Copy and paste the above command into the terminal where the OpenStack and Magnum clients are active:</p> <p></p>"},{"location":"kubernetes/How-To-Use-Command-Line-Interface-for-Kubernetes-Clusters-On-3Engines-Cloud-OpenStack-Magnum.html.html#how-to-check-upon-the-status-of-the-cluster","title":"How To Check Upon the Status of the Cluster\ud83d\udd17","text":"<p>The command to show the status of clusters is</p> <pre><code>openstack coe cluster list\n</code></pre> <p>newcluster is in the status CREATE_IN_PROGRESS, i.e. it is being created under the hood. Repeat the command after a minute or two and see the latest status, which now is CREATE_FAILED. To see the reason why the creation of the cluster stopped, go to the Horizon interface, list the clusters and click on the name of newcluster.</p> <p>Under Stack, there is a message like this:</p> <pre><code>Resource CREATE failed: OverQuotaClient: resources.secgroup_kube_master: Quota exceeded for resources:\n['security_group_rule']. Neutron server returns request_ids: ['req-1aff5045-db64-4075-81df-80611db8cb6c']\n</code></pre> <p>The quota for the security group rules was exceeded.
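The failure reason can also be read from the CLI instead of Horizon. The sketch below stubs the cloud call so it runs anywhere; in real use, show_reason would run `openstack coe cluster show newcluster -f value -c status_reason` (assuming status_reason, the cluster field that carries this message), and the sample text is a shortened copy of the quota error:

```shell
# Stubbed stand-in for:
#   openstack coe cluster show "$1" -f value -c status_reason
show_reason() {
  echo "Resource CREATE failed: OverQuotaClient: Quota exceeded for resources: ['security_group_rule']."
}

# Classify the failure so a script can react to it.
reason=$(show_reason newcluster)
case "$reason" in
  *"Quota exceeded"*) echo "cluster failed on quota" ;;
  *)                  echo "other failure: $reason" ;;
esac
# prints: cluster failed on quota
```

A wrapper like this is handy in automation, where there is no Horizon screen to click through.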
To verify, execute this command:</p> <pre><code>openstack quota show --default\n</code></pre> <p>The result may be too cluttered in a normal terminal window, so in this case more information will be available from the Horizon interface:</p> <p></p> <p>Red and orange colors denote danger, and you either have to ask support to double your quotas or delete the instances and clusters that have exceeded them.</p> <p>Note</p> <p>It is out of the scope of this article to describe how to delete elements through the Horizon interface. Make sure that quotas are available before new cluster creation.</p>"},{"location":"kubernetes/How-To-Use-Command-Line-Interface-for-Kubernetes-Clusters-On-3Engines-Cloud-OpenStack-Magnum.html.html#failure-to-create-a-cluster","title":"Failure to Create a Cluster\ud83d\udd17","text":"<p>There are many reasons why a cluster may fail to create. Maybe the state of the system quotas is not optimal, maybe there is a mismatch between the parameters of the cluster and the parameters in the rest of the cloud. For example, if you base the creation of a cluster on the default cluster template, it will use the Fedora distribution and require 10 GiB of memory. It may clash with --docker-volume-size if that was set up to be larger than 10 GiB.</p> <p>The flavors for master and minions are eo1.large, and if you want a larger Docker image size, increase the --master-flavor size.</p> <p>The entire cloud may be overloaded and the creation of a cluster may take longer than the default 60 minutes.
Set up the --timeout parameter to 120 or 180 minutes in such cases.</p> <p>If the creation process failed prematurely, then</p> <ul> <li>review system quotas</li> <li>delete the failed cluster(s)</li> <li>review system quotas again</li> <li>change parameters and</li> <li>run the cluster creation command again.</li> </ul>"},{"location":"kubernetes/How-To-Use-Command-Line-Interface-for-Kubernetes-Clusters-On-3Engines-Cloud-OpenStack-Magnum.html.html#cli-commands-to-delete-a-cluster","title":"CLI Commands to Delete a Cluster\ud83d\udd17","text":"<p>If the cluster failed to create, it is still taking up system resources. Delete it with a command such as</p> <pre><code>openstack coe cluster delete newcluster\n</code></pre> <p>List the clusters and you will first see that the status is DELETE_IN_PROGRESS and, after a while, the newcluster will disappear.</p> <p>Now try to delete the cluster largecluster. There are two of them, so issuing a command such as</p> <pre><code>openstack coe cluster delete largecluster\n</code></pre> <p>will not be accepted. Instead of the name, enter the uuid value:</p> <pre><code>openstack coe cluster delete e80c5815-d20b-4a2b-8588-49cf7a7e1aad\n</code></pre> <p>This time, the request will be accepted and, after a minute or two, the required cluster will disappear.</p> <p>Now there is only one largecluster, so this will work:</p> <pre><code>openstack coe cluster delete largecluster\n</code></pre> <p>Deleting clusters that were not installed properly has freed up a significant amount of system resources.
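The name-ambiguity workaround above is easy to script: resolve the name to UUIDs first, then delete each UUID. The sketch below stubs the listing with made-up sample data (only the first UUID comes from this article) so it is self-contained; in real use, list_clusters would run `openstack coe cluster list -f value -c uuid -c name` and the echo would be the actual delete call:

```shell
# Stubbed stand-in for: openstack coe cluster list -f value -c uuid -c name
list_clusters() {
  printf '%s\n' \
    'e80c5815-d20b-4a2b-8588-49cf7a7e1aad largecluster' \
    '11111111-2222-3333-4444-555566667777 largecluster' \
    'aaaaaaaa-bbbb-cccc-dddd-eeeeffff0000 kubernetes'
}

# Delete every cluster with the given name by UUID, sidestepping the
# rejection that happens when two clusters share a name.
# Real call in place of echo: openstack coe cluster delete "$uuid"
delete_by_name() {
  list_clusters | awk -v n="$1" '$2 == n {print $1}' | while read -r uuid; do
    echo "deleting $uuid"
  done
}

delete_by_name largecluster
```

Deleting by UUID is unambiguous even when names repeat, which is why the helper never passes the name itself to the delete command.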
There are no more orange and red quotas:</p> <p></p> <p>In this step you have successfully deleted the clusters whose creation stopped prematurely, thus paving the way to the creation of the next cluster under slightly different circumstances.</p>"},{"location":"kubernetes/How-To-Use-Command-Line-Interface-for-Kubernetes-Clusters-On-3Engines-Cloud-OpenStack-Magnum.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>In this tutorial, you have used the CLI commands to generate cluster templates as well as clusters themselves. You have also seen how to free up system resources and try again if cluster creation failed.</p> <p>OpenStack and Magnum did the heavy lifting for you, letting you create full-fledged Kubernetes clusters with only a handful of CLI commands. The next step is to start working with the Kubernetes clusters directly. That means installing the kubectl command with the article How To Access Kubernetes Cluster Post Deployment Using Kubectl On 3Engines Cloud OpenStack Magnum and using it to install the apps that you want to run on Kubernetes clusters.</p>"},{"location":"kubernetes/How-to-Create-a-Kubernetes-Cluster-Using-3Engines-Cloud-OpenStack-Magnum.html.html","title":"How to Create a Kubernetes Cluster Using 3Engines Cloud OpenStack Magnum\ud83d\udd17","text":"<p>In this tutorial, you will start with an empty Horizon screen and end up running a full Kubernetes cluster.</p>"},{"location":"kubernetes/How-to-Create-a-Kubernetes-Cluster-Using-3Engines-Cloud-OpenStack-Magnum.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Creating a new Kubernetes cluster using one of the default cluster templates</li> <li>Visual interpretation of created networks and Kubernetes cluster nodes</li> </ul>"},{"location":"kubernetes/How-to-Create-a-Kubernetes-Cluster-Using-3Engines-Cloud-OpenStack-Magnum.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No.
1 Hosting</p> <p>You need a 3Engines Cloud hosting account with Horizon interface https://horizon.3Engines.com.</p> <p>The resources that you require and use will reflect on the state of your account wallet. Check your account statistics at https://portal.3Engines.com/ and if you are not going to use the cluster any more, remove them altogether to save resources costs.</p> <p>Magnum clusters created by certain users are bound together with an impersonation token and in the event of removing that user from the project, the cluster will lose authentication to Openstack API making cluster non-operational. A typical scenario would be for the tenant manager to create user accounts and let them create Kubernetes clusters. Later on, in this scenario, when the cluster is operational, the user would be removed from the project. The cluster would be present but the user could not, say, create new clusters, or persistent volume claims would be dysfunctional and so on.</p> <p>Therefore, good practice in creation of new Kubernetes clusters is to create a service account dedicated to creating a Magnum cluster. In essence, devote one account to one Kubernetes cluster, nothing more and nothing less.</p> <p>No. 2 Private and public keys</p> <p>An SSH key-pair created in OpenStack dashboard. To create it, follow this article How to create key pair in OpenStack Dashboard on 3Engines Cloud.</p> <p>The key pair created in that article is called \u201csshkey\u201d. 
You will use it as one of the parameters for creation of the Kubernetes cluster.</p>"},{"location":"kubernetes/How-to-Create-a-Kubernetes-Cluster-Using-3Engines-Cloud-OpenStack-Magnum.html.html#step-1-create-new-cluster-screen","title":"Step 1 Create New Cluster Screen\ud83d\udd17","text":"<p>Click on Container Infra and then on Clusters.</p> <p></p> <p>There are no clusters yet so click on button + Create Cluster on the right side of the screen.</p> <p></p> <p>On the left side and in blue color are the main options \u2013 screens into which you will enter data for the cluster. The three with the asterisks, Details, Size, and Network are mandatory; you must visit them and either enter new values or confirm the offered default values within each screen. When all the values are entered, the Submit button in the lower right corner will become active.</p> <p>Cluster Name</p> <p>This is your first cluster, name it just Kubernetes.</p> <p></p> <p>Cluster name cannot contain spaces. Using a name such as XYZ k8s Production will result in an error message, while a name such as XYZ-k8s-Production won\u2019t.</p> <p>Cluster Template</p> <p>Cluster template is a blueprint for base configuration of the cluster, where the version number reflects the Kubernetes version used.</p> <p>You immediately see how the cluster template is applied:</p> <p></p> <p>Availability Zone</p> <p>nova is the name of the related module in OpenStack and is the only option offered here.</p> <p>Keypair</p> <p>Assuming you have used Prerequisite No. 2, choose sshkey.</p> <p></p> <p>Addon Software - Enable Access to EO Data</p> <p>This field is specific to OpenStack systems that are developed by 3Engines hosting company. 
EODATA here means Earth Observation Data and refers to data gained from scientific satellites monitoring the Earth.</p> <p>Checking this field will install a network which will have access to the downloaded satellite data.</p> <p>If you are just trying to learn about Kubernetes on OpenStack, leave this option unchecked. And vice versa: if you want to go into production and use satellite data, turn it on.</p> <p>Note</p> <p>There is a cluster template label called eodata_access_enabled=true which \u2013 if turned on \u2013 will have the same effect of creating a network for connecting to the EODATA.</p> <p>This is what the screen looks like when all the data have been entered:</p> <p></p> <p>Click on the lower right button Next or on the option Size from the left main menu of the screen to proceed to the next step of defining a Kubernetes cluster.</p>"},{"location":"kubernetes/How-to-Create-a-Kubernetes-Cluster-Using-3Engines-Cloud-OpenStack-Magnum.html.html#step-2-define-master-and-worker-nodes","title":"Step 2 Define Master and Worker Nodes\ud83d\udd17","text":"<p>In general terms, master nodes are used to host the internal infrastructure of the cluster, while the worker nodes are used to host the K8s applications.</p> <p>This is how this window looks before entering the data:</p> <p></p> <p>If there are any fields with default values, such as Flavor of Master Nodes and Flavor of Worker Nodes, these values were predefined in the cluster template.</p> <p>Number of Master Nodes</p> <p></p> <p>A Kubernetes cluster has master and worker nodes. In real applications, a typical setup would be running 3 master nodes to ensure High Availability of the cluster\u2019s infrastructure. Here, you want to create your first cluster in a new environment, so settle for just 1 master node.</p> <p>Flavor of Master Nodes</p> <p></p> <p>Select eo1.large for the master node flavor.</p> <p>Number of Worker Nodes</p> <p></p> <p>Enter 3.
This is for introductory purposes only; in real life the cluster can consist of multiple worker nodes. The cluster sizing guidelines are beyond the scope of this article.</p> <p>Flavor of Worker Nodes</p> <p>Again, choose eo1.large.</p> <p>Auto Scaling</p> <p></p> <p>When there is a lot of demand for the workers\u2019 services, the Kubernetes system can scale to using more worker nodes. Our sample setting is a minimum of 2 and a maximum of 4 worker nodes. With this setting the number of nodes will be dynamically adjusted between these values, based on the ongoing load (number and resource requests of pods running K8S applications on the cluster).</p> <p>Here is what the screen Size looks like when all the data are entered:</p> <p></p> <p>To proceed, click on the lower right button Next or on the option Network from the left main menu.</p>"},{"location":"kubernetes/How-to-Create-a-Kubernetes-Cluster-Using-3Engines-Cloud-OpenStack-Magnum.html.html#step-3-defining-network-and-loadbalancer","title":"Step 3 Defining Network and LoadBalancer\ud83d\udd17","text":"<p>This is the last of the mandatory screens and the blue Submit button in the lower right corner is now active. (If it is not, use the screen button Back to fix values in previous screens.)</p> <p></p> <p>Enable Load Balancer for Master Nodes</p> <p>This option will be checked automatically when you select more than one master node. Using multiple master nodes ensures High Availability of the cluster infrastructure, and in that case the Load Balancer is necessary to distribute the traffic between masters.</p> <p>If you selected only one master node, which might be relevant in non-production scenarios, e.g. testing, you will still have an option to either add or skip the Load Balancer. Note that using a LoadBalancer with one master node is still a relevant option, as it will allow you to access the cluster from outside of the cluster network.
With no such option selected you will need to rely on SSH access to the master.</p> <p>Create New Network</p> <p>This box comes turned on, meaning that the system will create a network just for this cluster. Since Kubernetes clusters need subnets for inter-communications, a related subnetwork will be firstly created and then used further down the road.</p> <p>It is strongly recommended to use automatic creation of network when creating a new cluster.</p> <p>However, turning the checkbox off discloses an option to use an existing network as well.</p> <p>Use an Existing Network</p> <p>Using an existing network is a more advanced option. You would need to first create a network dedicated to this cluster in OpenStack along with the necessary adjustments. Creation of such a custom network is beyond the scope of this article. Note you should not use the network of another cluster, project network or EODATA network.</p> <p>If you have an existing network and you would like to proceed, you will need to choose the network and the subnet from the dropdown below:</p> <p></p> <p>Both fields have an asterisk behind them, meaning you must specify a concrete value in each of the two fields.</p> <p>Cluster API</p> <p>The setting of \u201cAvailable on public internet\u201d implies that floating IPs will be assigned to both master and worker nodes. This option is usually redundant and has security concerns. Unless you have a specific requirement, leave this option on \u201cprivate\u201d setting. 
Then you can always assign floating IPs to required nodes from the \u201cCompute\u201d section in Horizon.</p> <p>Ingress Controller</p> <p>Use of ingress is a more advanced feature, related to load balancing the traffic to the Kubernetes applications.</p> <p>If you are just starting with Kubernetes, you will rather not require this feature immediately, so you could leave this option out.</p>"},{"location":"kubernetes/How-to-Create-a-Kubernetes-Cluster-Using-3Engines-Cloud-OpenStack-Magnum.html.html#step-4-advanced-options","title":"Step 4 Advanced options\ud83d\udd17","text":"<p>Option Management</p> <p></p> <p>There is just one option in this window, Auto Healing and its field Automatically Repair Unhealthy Nodes.</p> <p>Node is a basic unit of Kubernetes cluster and the Kubernetes systems software will automatically poll the state of each cluster; if not ready or not available, the system will replace the unhealthy node with a healthy one \u2013 provided, of course, that this field is checked on.</p> <p>If this is your first time trying out the formation of Kubernetes clusters, auto healing may not be of interest to you. In production, however, auto healing should always be on.</p> <p>Option Advanced</p> <p></p> <p>Option Advanced allows for entering of so-called labels, which are named parameters for the Kubernetes system. Normally, you don\u2019t have to enter anything here.</p> <p>Labels can change how the cluster creation is performed. There is a set of labels, called the Template and Workflow Labels, that the system sets up by default. If this check box is left as is, that is, unchecked, the default labels will be used unchanged. That guarantees that the cluster will be formed with all of the essential parameters in order. 
Even if you add your own labels, as shown in the image above, everything will still function.</p> <p>If you turn on the field I do want to override Template and Workflow Labels and if you use any of the Template and Workflow Labels by name, they will be set up the way you specified. Use this option very rarely, if at all, and only if you are sure of what you are doing.</p>"},{"location":"kubernetes/How-to-Create-a-Kubernetes-Cluster-Using-3Engines-Cloud-OpenStack-Magnum.html.html#step-5-forming-of-the-cluster","title":"Step 5 Forming of the Cluster\ud83d\udd17","text":"<p>Once you click on the Submit button, OpenStack will start creating the Kubernetes cluster for you. A message with a green background will appear in the upper right corner of the window, stating that the creation of the cluster has started.</p> <p>Cluster generation usually takes from 10 to 15 minutes. It will be automatically abandoned if it takes longer than 60 minutes.</p> <p>If there is any problem with the creation of the cluster, the system will signal it in various ways. 
You may see a message in the upper right corner, with a red background, like this:</p> <p></p> <p>Just repeat the process and in most cases you will proceed to the following screen:</p> <p></p> <p>Click on the name of the cluster, Kubernetes, and see what it will look like if everything went well.</p> <p></p>"},{"location":"kubernetes/How-to-Create-a-Kubernetes-Cluster-Using-3Engines-Cloud-OpenStack-Magnum.html.html#step-6-review-cluster-state","title":"Step 6 Review cluster state\ud83d\udd17","text":"<p>Here is what OpenStack Magnum created for you as the result of filling in the data in those three screens:</p> <ul> <li>A new network called Kubernetes, complete with subnet, ready to connect further.</li> <li>New instances \u2013 virtual machines that serve as nodes.</li> <li>A new external router.</li> <li>New security groups, and of course</li> <li>A fully functioning Kubernetes cluster on top of all these other elements.</li> </ul> <p>You can observe that the number of nodes in the cluster was initially 3, but after a while the cluster auto-scaled itself to 2. This is expected and is the result of the autoscaler, which detected that our cluster is still mostly idle in terms of application load.</p> <p>There is another way in which we can view our cluster setup and inspect any deviations from the required state. Click on Network in the main menu and then on Network Topology. You will see a real-time graphical representation of the network. As soon as one of the cluster elements is added, it will be shown on screen.</p> <p></p> <p>Also, in Horizon\u2019s \u201cCompute\u201d panel you can see the virtual machines which were created for master and worker nodes:</p> <p></p> <p>Node names start with kubernetes because that is the name of the cluster in lower case.</p> <p>Resources tied up by one attempt at creating a cluster are not automatically reclaimed when you attempt to create a new cluster again. 
Therefore, several attempts in a row will lead to a stalemate situation, in which no cluster will be formed until all of the tied-up resources are freed up.</p>"},{"location":"kubernetes/How-to-Create-a-Kubernetes-Cluster-Using-3Engines-Cloud-OpenStack-Magnum.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>You now have a fully operational Kubernetes cluster. You can</p> <ul> <li>use ready-made Docker images to automate installation of apps,</li> <li>activate the Kubernetes dashboard and watch the state of the cluster online</li> </ul> <p>and so on.</p> <p>Here are some relevant articles:</p> <p>Read more about ingress here: Using Kubernetes Ingress on 3Engines Cloud OpenStack Magnum</p> <p>Article How To Use Command Line Interface for Kubernetes Clusters On 3Engines Cloud OpenStack Magnum shows how to use the command line interface to create Kubernetes clusters.</p> <p>To access your newly created cluster from the command line, see article How To Access Kubernetes Cluster Post Deployment Using Kubectl On 3Engines Cloud OpenStack Magnum.</p>"},{"location":"kubernetes/How-to-create-Kubernetes-cluster-using-Terraform-on-3Engines-Cloud.html.html","title":"How to create Kubernetes cluster using Terraform on 3Engines Cloud\ud83d\udd17","text":"<p>In this article we demonstrate using Terraform to deploy an OpenStack Magnum Kubernetes cluster on 3Engines Cloud cloud.</p>"},{"location":"kubernetes/How-to-create-Kubernetes-cluster-using-Terraform-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Hosting account</p> <p>You need an active 3Engines Cloud account https://portal.3Engines.com/.</p> <p>No. 2 Active CLI session with OpenStackClient for Linux</p> <p>You need an OpenStack CLI installed and the respective Python virtual environment sourced. 
For guidelines see:</p> <p>How to install OpenStackClient for Linux on 3Engines Cloud</p> <p>It will show you how to install Python, create and activate a virtual environment, and then connect to the cloud by downloading and activating the proper RC file from the 3Engines Cloud cloud.</p> <p>No. 3 Connect to the cloud via an RC file</p> <p>Another article, How to activate OpenStack CLI access to 3Engines Cloud cloud using one- or two-factor authentication, deals with connecting to the cloud and covers either of the one- or two-factor authentication procedures enabled on your account. It also covers all the main platforms: Linux, macOS and Windows.</p> <p>You will use both the Python virtual environment and the downloaded RC file after Terraform has been installed.</p> <p>No. 4 Familiarity with creating Kubernetes clusters</p> <p>Familiarity with creating Kubernetes clusters in a standard way, e.g. using Horizon or the OpenStack CLI:</p> <p>How to Create a Kubernetes Cluster Using 3Engines Cloud OpenStack Magnum</p> <p>How To Use Command Line Interface for Kubernetes Clusters On 3Engines Cloud OpenStack Magnum</p> <p>No. 5 Terraform operational</p> <p>Have Terraform installed locally or on a cloud VM - installation guidelines along with further information can be found in this article:</p> <p>Generating and authorizing Terraform using Keycloak user on 3Engines Cloud</p> <p>After you finish working through that article, you will have access to the cloud via an active openstack command. 
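</p> <p>A quick way to confirm that the openstack command is really active (a sketch; it requires the RC file to be sourced first) is to request a token:</p> <pre><code>openstack token issue\n</code></pre> <p>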
Also, special environment (env) variables (OS_USERNAME, OS_PASSWORD, OS_AUTH_URL and others) will be set up so that various programs can use them \u2013 Terraform being the prime target here.</p>"},{"location":"kubernetes/How-to-create-Kubernetes-cluster-using-Terraform-on-3Engines-Cloud.html.html#define-provider-for-terraform","title":"Define provider for Terraform\ud83d\udd17","text":"<p>Terraform uses the notion of a provider, which represents your concrete cloud environment and covers authentication. 3Engines Cloud clouds are built on OpenStack technology, and OpenStack is one of the standard provider types for Terraform.</p> <p>We need to:</p> <ul> <li>instruct Terraform to use OpenStack as a provider type</li> <li>provide credentials which will point to our own project and user in the cloud.</li> </ul> <p>Assuming you have worked through Prerequisite No. 2 (download and source the RC file), several OpenStack-related environment variables will be populated in your local system. The ones pointing to your OpenStack environment start with OS_, e.g. OS_USERNAME, OS_PASSWORD, OS_AUTH_URL. 
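</p> <p>As a quick local check (a minimal sketch; which variables are present depends on the RC file you sourced), you can list the OpenStack-related variables currently set in your shell:</p> <pre><code>env | grep \"^OS_\" || echo \"No OS_ variables set - source the RC file first\"\n</code></pre> <p>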
When we define OpenStack as the Terraform provider type, Terraform will know to automatically use these env variables to authenticate.</p> <p>Let\u2019s define the Terraform provider now by creating a file provider.tf with the following contents:</p> <p>provider.tf</p> <pre><code># Define providers\nterraform {\n  required_version = \"&gt;= 0.14.0\"\n  required_providers {\n    openstack = {\n      source = \"terraform-provider-openstack/openstack\"\n      version = \"~&gt; 1.35.0\"\n    }\n  }\n}\n\n# Configure the OpenStack Provider\nprovider \"openstack\" {\n  auth_url = \"https://keystone.3Engines.com:5000/v3\"\n  # the rest of the configuration parameters are taken from environment variables once the RC file is correctly sourced\n}\n</code></pre> <p>The auth_url is the only configuration option that must be provided in the configuration file, even though it is also available in the environment variables.</p> <p>Having this provider spec allows us to create a cluster in the following steps; it can also be reused to create other resources in your OpenStack environment, e.g. virtual machines, volumes and many others.</p>"},{"location":"kubernetes/How-to-create-Kubernetes-cluster-using-Terraform-on-3Engines-Cloud.html.html#define-cluster-resource-in-terraform","title":"Define cluster resource in Terraform\ud83d\udd17","text":"<p>The second step is to define the exact specification of a resource that we want to create with Terraform. In our case we want to create an OpenStack Magnum cluster. In Terraform terminology, it will be an instance of the openstack_containerinfra_cluster_v1 resource type. 
To proceed, create a file cluster.tf which contains the specification of our cluster:</p> <p>cluster.tf</p> <pre><code># Create resource\nresource \"openstack_containerinfra_cluster_v1\" \"k8s-cluster\" {\n  name = \"k8s-cluster\"\n  cluster_template_id = \"524535ed-9a0f-4b70-966f-6830cdc52604\"\n  node_count = 3\n  master_count = 3\n  flavor = \"eo1.large\"\n  master_flavor = \"hmad.medium\"\n  keypair = \"mykeypair\"\n  labels = {\n    eodata_access_enabled = true\n    etcd_volume_size = 0\n  }\n  merge_labels = true\n}\n</code></pre> <p>The above setup reflects a cluster with some frequently used customizations:</p> cluster_template_id corresponds to the ID of one of the default cluster templates in the WAW3-2 cloud, which is k8s-localstorage-1.23.16-v1.0.0. The default templates and their IDs can be looked up in the Horizon UI in the submenu Container Infra \u2192 Cluster Templates. node_count, master_count, flavor and master_flavor correspond intuitively to the count and flavor of worker and master nodes in the cluster. keypair reflects the name of the keypair used in our OpenStack project in the chosen cloud labels and merge_labels <p>We use two labels:</p> eodata_access_enabled=true ensures that the EODATA network with fast access to satellite images is connected to our cluster nodes, etcd_volume_size=0 ensures that master nodes are properly provisioned with NVMe local storage. <p>With this configuration, it is mandatory to also set merge_labels=true to properly apply these labels and avoid having them overwritten by template defaults.</p> <p>In our example we operate on the WAW3-2 cloud, where the flavor hmad.medium is available. If using another cloud, adjust the parameters accordingly.</p> <p>The above configuration reflects a cluster where a load balancer is placed in front of the master nodes, and where this load balancer\u2019s flavor is HA-large. 
Customizing this default, as with other more advanced defaults, would require creating a custom Magnum template, which is beyond the scope of this article.</p>"},{"location":"kubernetes/How-to-create-Kubernetes-cluster-using-Terraform-on-3Engines-Cloud.html.html#apply-the-configurations-and-create-the-cluster","title":"Apply the configurations and create the cluster\ud83d\udd17","text":"<p>Once both Terraform configurations described in the previous steps are defined, we can apply them to create our cluster.</p> <p>The first step is to have both files provider.tf and cluster.tf available in a dedicated folder. Then cd to this folder and type:</p> <pre><code>terraform init\n</code></pre> <p>This command will initialize our cluster deployment. It will catch any formal errors with authentication to OpenStack, which might need correcting before moving to the next stage.</p> <p></p> <p>As the next step, Terraform will plan the actions it needs to perform to create the resource. Proceed with typing:</p> <pre><code>terraform plan\n</code></pre> <p>The result is shown below and gives a chance to correct any logical errors in our expected setup:</p> <p></p> <p>The last step is to apply the planned changes. Perform this step with the command:</p> <pre><code>terraform apply\n</code></pre> <p>The output of this last command will initially repeat the plan, then ask you to enter the word yes to set Terraform into action.</p> <p>Upon confirming with yes, the action is deployed and the console will update every 10 seconds with a \u201cStill creating \u2026\u201d message until our cluster is created.</p> <p>The final lines of the output, after successfully provisioning the cluster, should read similar to the following:</p> <p></p>"},{"location":"kubernetes/How-to-create-Kubernetes-cluster-using-Terraform-on-3Engines-Cloud.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>Terraform can also be used to deploy additional applications to our cluster, e.g. 
using the Helm provider for Terraform. Check the Terraform documentation for more details.</p>"},{"location":"kubernetes/How-to-install-Rancher-RKE2-Kubernetes-on-3Engines-Cloud-cloud.html.html","title":"How to install Rancher RKE2 Kubernetes on 3Engines Cloud\ud83d\udd17","text":"<p>RKE2 - Rancher Kubernetes Engine version 2 - is a Kubernetes distribution provided by SUSE. Running a self-managed RKE2 cluster in 3Engines Cloud cloud is a viable option, especially for those seeking smooth integration with the Rancher platform and customization options.</p> <p>An RKE2 cluster can be provisioned from the Rancher GUI. However, in this article we use Terraform, which enables streamlined, automated cluster creation. We also use OpenStack Cloud Controller Manager (CCM) to integrate the RKE2 cluster with the wider OpenStack environment. Using the customized version of CCM enables us to take advantage of 3Engines Cloud cloud-native features. The end result is</p> <ul> <li>a provisioned RKE2 cluster</li> <li>running under OpenStack, with</li> <li>an integrated OpenStack Cloud Controller Manager.</li> </ul> <p>We also illustrate the coding techniques used, in case you want to enhance the RKE2 implementation further.</p>"},{"location":"kubernetes/How-to-install-Rancher-RKE2-Kubernetes-on-3Engines-Cloud-cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Perform the preliminary setup</li> </ul> <ul> <li>Create new project</li> <li>Create application credentials</li> <li>Have keypair operational</li> <li>Authenticate to the newly formed project</li> </ul> <ul> <li>Use Terraform configuration for RKE2 from 3Engines\u2019s GitHub repository</li> <li>Provision an RKE2 cluster</li> <li>Demonstrate the incorporated cloud-native load-balancing</li> <li>Implementation details</li> <li>Further customization</li> </ul> <p>The code is tested on Ubuntu 
22.04.</p>"},{"location":"kubernetes/How-to-install-Rancher-RKE2-Kubernetes-on-3Engines-Cloud-cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p> <p>No. 2 Terraform available on your local command line</p> <p>See Generating and authorizing Terraform using Keycloak user on 3Engines Cloud</p> <p>No. 3 Python virtual environment sourced</p> <p>How to install Python virtualenv or virtualenvwrapper on 3Engines Cloud</p> <p>No. 4 OpenStack CLI installed locally</p> <p>When installed, you will have access to the openstack command and will be able to communicate with the OpenStack cloud:</p> <p>How to activate OpenStack CLI access to 3Engines Cloud cloud using one- or two-factor authentication</p> <p>No. 5 kubectl tool installed locally</p> <p>Standard types of kubectl installation are described on the Install Tools page of the official Kubernetes site.</p> <p>No. 6 Available key pair in OpenStack</p> <p>How to create key pair in OpenStack Dashboard on 3Engines Cloud.</p> <p>No. 7 Application credentials</p> <p>The following article describes how to create and use application credentials using the CLI:</p> <p>How to generate or use Application Credentials via CLI on 3Engines Cloud</p> <p>In this article, we shall create application credentials through Horizon but with a specific selection of user roles.</p> <p>No. 8 Projects, roles, users and groups</p> <p>The Identity option lists available projects, roles, users and groups. See What is an OpenStack project on 3Engines Cloud</p> <p>No. 9 Experience with Kubernetes and Helm</p> <p>To follow up on this article, you should know your way around Kubernetes in general. Having actual experience of using it on 3Engines Cloud cloud would be even better. 
For a series of articles on Kubernetes, see KUBERNETES.</p> <p>To perform the installation required in this article, one of the steps will be to create a Helm CRD and use it. This article shows the basics of using Helm: Deploying Helm Charts on Magnum Kubernetes Clusters on 3Engines Cloud Cloud.</p> <p>No. 10 Cloud Controller Manager</p> <p>Within a general Kubernetes environment, the Cloud Controller Manager (CCM) allows Kubernetes to integrate with cloud provider APIs. It abstracts cloud-specific logic and manages and synchronizes resources between Kubernetes and the underlying cloud infrastructure. Also, it provides controllers for Nodes, Routes, Services and Volumes.</p> <p>Under OpenStack, CCM integrates with OpenStack APIs. The code used here is from a concrete repository for Cloud Controller Manager \u2013 https://github.com/kubernetes/cloud-provider-openstack It implements the above-mentioned as well as other OpenStack-Kubernetes integrations.</p> <p>No. 11 rke2-terraform repository</p> <p>You will need to download the following repository</p> <p>https://github.com/3Engines/K8s-samples/tree/main/rke2-terraform</p> <p>in order to install the Terraform manifests for provisioning RKE2 on 3Engines Cloud using Terraform.</p> <p>No. 12 Customize the cloud configuration for Terraform</p> <p>One of the files downloaded from the above link will be variables.tf. It contains definitions of the region, cluster name and many other variables. The default value for region is WAW3-2, so customize it for your own cloud.</p> <p></p>"},{"location":"kubernetes/How-to-install-Rancher-RKE2-Kubernetes-on-3Engines-Cloud-cloud.html.html#step-1-perform-the-preliminary-setup","title":"Step 1 Perform the preliminary setup\ud83d\udd17","text":"<p>Our objective is to create a Kubernetes cluster which runs in the cloud environment. RKE2 software packages will be installed on cloud virtual machines playing the roles of Kubernetes master and worker nodes. 
Also, several other OpenStack resources will be created along the way.</p> <p>As part of the preliminary setup to provision these resources we will:</p> <ul> <li>Create a dedicated OpenStack project to isolate all resources dedicated to the cluster</li> <li>Create application credentials</li> <li>Ensure a key pair is enabled for the project</li> <li>Source locally the RC file for this project</li> </ul> <p>Here we provide the instructions to create the project, credentials and key pair, and to source the RC file locally.</p>"},{"location":"kubernetes/How-to-install-Rancher-RKE2-Kubernetes-on-3Engines-Cloud-cloud.html.html#preparation-step-1-create-new-project","title":"Preparation step 1 Create new project\ud83d\udd17","text":"<p>The first step is to create a new project using the Horizon UI. Click on Identity \u2192 Projects. Fill in the name of the project on the first tab:</p> <p></p> <p>In the second tab, ensure that the user you operate with is added as a project member with the \u201cmember\u201d, \u201cload-balancer_member\u201d and \u201ccreator\u201d roles.</p> <p></p> <p>Then click on \u201cCreate Project\u201d. Once the project is created, switch to the context of this project from the top left menu:</p> <p></p>"},{"location":"kubernetes/How-to-install-Rancher-RKE2-Kubernetes-on-3Engines-Cloud-cloud.html.html#preparation-step-2-create-application-credentials","title":"Preparation step 2 Create application credentials\ud83d\udd17","text":"<p>The next step is to create an application credential that will be used to authenticate the OpenStack Cloud Controller Manager (used for automated load balancer provisioning). To create one, go to the menu Identity \u2192 Application Credentials. Fill in the form as per the example below, passing all available roles (\u201cmember\u201d, \u201cload-balancer_member\u201d, \u201ccreator\u201d, \u201creader\u201d) to this credential. 
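</p> <p>The same credential could also be created from the command line (a sketch; the credential name rke2-ccm is a placeholder of our choosing):</p> <pre><code>openstack application credential create \\\n  --role member --role load-balancer_member \\\n  --role creator --role reader \\\n  rke2-ccm\n</code></pre> <p>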
Set the expiry date to a date in the future.</p> <p></p> <p>After clicking on Create Application Credential, copy both the application ID and the credential secret to a safe place. The window will only be displayed once, so the best solution is to download the files openrc and clouds.yaml, which will both contain the required values.</p> <p></p> <p>Prerequisite No. 7 contains a complete guide to application credentials.</p>"},{"location":"kubernetes/How-to-install-Rancher-RKE2-Kubernetes-on-3Engines-Cloud-cloud.html.html#preparation-step-3-keypair-operational","title":"Preparation step 3 Keypair operational\ud83d\udd17","text":"<p>Before continuing, ensure you have a keypair available. If you already had a keypair in your main project, this keypair will also be available in the newly created project. If you do not have one yet, create it from the left menu Project \u2192 Compute \u2192 Key Pairs. For additional details, visit Prerequisite No. 6.</p>"},{"location":"kubernetes/How-to-install-Rancher-RKE2-Kubernetes-on-3Engines-Cloud-cloud.html.html#preparation-step-4-authenticate-to-the-newly-formed-project","title":"Preparation step 4 Authenticate to the newly formed project\ud83d\udd17","text":"<p>Lastly, download the RC file corresponding to the new project from the Horizon GUI, then source this file in your local Linux terminal. See Prerequisite No. 4.</p>"},{"location":"kubernetes/How-to-install-Rancher-RKE2-Kubernetes-on-3Engines-Cloud-cloud.html.html#step-2-use-terraform-configuration-for-rke2-from-3enginess-github-repository","title":"Step 2 Use Terraform configuration for RKE2 from 3Engines\u2019s GitHub repository\ud83d\udd17","text":"<p>We added the folder rke2-terraform to 3Engines\u2019s K8s-samples GitHub repository from Prerequisite No. 11. 
This project includes configuration files to provision an RKE2 cluster on 3Engines clouds and can be used as a starter pack for further customizations to your specific requirements.</p> <p></p> <p>In this section, we briefly introduce this repository, explaining the content and purpose of the specific configuration files. These files contain the actual instructions for Terraform and are defined in its standard format, with the extension .tf.</p> variables.tf Contains key variables that specify the configuration of our cluster, e.g. the number of worker nodes, the cloud region where the cluster will be placed, and the name of the cluster. Most of these variables have their default values set and you can modify these defaults directly in the file. The variables with no defaults (secret, sensitive data) should have their values provided separately, via a tfvars file, which is explained in the next section. providers.tf Used for declaring and configuring Terraform providers. In our case, we only use the OpenStack provider, which provisions the cloud resources that form the cluster. main.tf Contains the declaration of resources to be created by Terraform. Several OpenStack resources are required to form a cluster, e.g. a Network, Subnet, Router, Virtual Machines and others. Review the file for details and customize to your preference. security-groups.tf Contains the declaration of security groups and security group rules used in OpenStack to open specific ports on the virtual machines forming the cluster. Thus, communication from selected sources gets enabled on each VM. Modify the file to customize. 
cloud-init-masters.yml.tpl and cloud-init-workers.yml.tpl <p>These two are template files used to create cloud-init files, which in turn are used for bootstrapping the created virtual machines:</p> <ul> <li>ensuring certain packages are installed on these VMs,</li> <li>creating and running scripts on them etc.</li> </ul> <p>The content of these templates gets populated based on the user-data section in the virtual machine declarations in main.tf.</p> <p>One of the primary functions of each cloud-init file is to install rke2 on both master and worker nodes.</p>"},{"location":"kubernetes/How-to-install-Rancher-RKE2-Kubernetes-on-3Engines-Cloud-cloud.html.html#step-3-provision-an-rke2-cluster","title":"Step 3 Provision an RKE2 cluster\ud83d\udd17","text":"<p>Let\u2019s provision an RKE2 Kubernetes cluster now. This will consist of the following steps:</p> <ul> <li>Clone the GitHub repository</li> <li>Adjust the defaults in variables.tf</li> <li>Create file terraform.tfvars with secrets</li> <li>Initialize, plan and apply the Terraform configurations</li> <li>Use the retrieved kubeconfig to access the cluster with kubectl</li> </ul> <p>The first step is to clone the GitHub repository. We clone the entire repo but just keep the rke2-terraform folder, with the below commands:</p> <pre><code>git clone https://github.com/3Engines/K8s-samples\nmkdir ~/rke2-terraform\nmv K8s-samples/rke2-terraform/* ~/rke2-terraform\nrm -rf K8s-samples\ncd ~/rke2-terraform\n</code></pre> <p>As mentioned in Prerequisite No. 12, inspect and, if necessary, change the values of the default settings in variables.tf, e.g. change the name of the cluster, the cloud region or the virtual machine settings.</p> <p>In our case, we stick to the defaults.</p> <p>Note</p> <p>A highly available control plane is currently not covered by this repository. 
Also, setting the number of master nodes to a value other than 1 is not supported.</p>"},{"location":"kubernetes/How-to-install-Rancher-RKE2-Kubernetes-on-3Engines-Cloud-cloud.html.html#enter-data-in-file-terraformtfvars","title":"Enter data in file terraform.tfvars\ud83d\udd17","text":"<p>The next step is to create a file terraform.tfvars with the following contents:</p> <pre><code>ssh_keypair_name = \"your_ssh_keypair_name\"\nproject_id = \"your_project_id\"\npublic_key = \"your_public_key\"\napplication_credential_id = \"your_app_credential_id\"\napplication_credential_secret = \"your_app_credential_secret\"\n</code></pre> Get ssh_keypair_name Choose one from the list shown after Compute -&gt; Key Pairs. Get project_id To get project_id, the easiest way is to list all of the projects with Identity -&gt; Projects, click on the project name and read the ID. Get public_key To get public_key, execute Compute -&gt; Key Pairs and click on the name of the keypair you have entered for the variable ssh_keypair_name. Get application_credential_id Get the application credential ID from one of the files openrc or clouds.yaml. Get application_credential_secret The same, only for the secret."},{"location":"kubernetes/How-to-install-Rancher-RKE2-Kubernetes-on-3Engines-Cloud-cloud.html.html#run-terraform-to-provision-rke2-cluster","title":"Run Terraform to provision RKE2 cluster\ud83d\udd17","text":"<p>This completes the setup part. We can now run the standard Terraform commands - init, plan and apply - to create our RKE2 cluster. The commands should be executed in the order provided below. Type yes when required to reconfirm the steps planned by Terraform.</p> <pre><code>terraform init\nterraform plan\nterraform apply\n</code></pre> <p>The provisioning will take a few minutes (approx. 5-10 minutes for a small cluster). Logs will be printed to the console confirming the creation of each resource. 
Here is a sample final output from the terraform apply command:</p> <p></p> <p>As a part of the provisioning process, the kubeconfig file kubeconfig.yaml will be copied to your local working directory. Export the environment variable pointing your local kubectl installation to this kubeconfig location (replace the path in the sample command below):</p> <pre><code>export KUBECONFIG=/path_to_your_kubeconfig_file/kubeconfig.yaml\n</code></pre> <p>Then check whether the cluster is available with:</p> <pre><code>kubectl get nodes\n</code></pre> <p>We can see that the cluster is provisioned correctly in our case, with both master and worker nodes being Ready:</p> <p></p>"},{"location":"kubernetes/How-to-install-Rancher-RKE2-Kubernetes-on-3Engines-Cloud-cloud.html.html#step-4-demonstrate-cloud-native-integration-covered-by-the-repo","title":"Step 4 Demonstrate cloud-native integration covered by the repo\ud83d\udd17","text":"<p>We can verify the automated provisioning of load balancers and public Floating IP by exposing a service of type LoadBalancer. The following kubectl commands will deploy and expose an nginx server in our RKE2 cluster\u2019s default namespace:</p> <pre><code>kubectl create deployment nginx-deployment --image=nginx:latest\nkubectl expose deployment nginx-deployment --type=LoadBalancer --port=80 --target-port=80\n</code></pre> <p>It takes around 2-3 minutes for the FIP and LoadBalancer to be provisioned. 
When you run this command:</p> <pre><code>kubectl get services\n</code></pre> <p>After this time, you should see a result similar to the one below, where EXTERNAL-IP has been properly populated:</p> <p></p> <p>Similarly, you could verify the presence of the created load balancer in the Horizon interface via the left menu: Project \u2192 Network \u2192 LoadBalancers</p> <p></p> <p>and Project \u2192 Network \u2192 Floating IPs:</p> <p></p> <p>Finally, we can check that the service is running as a public service in our browser with the assigned floating IP:</p> <p></p>"},{"location":"kubernetes/How-to-install-Rancher-RKE2-Kubernetes-on-3Engines-Cloud-cloud.html.html#implementation-details","title":"Implementation details\ud83d\udd17","text":"<p>Explaining all of the techniques that went into the production of the RKE2 repository from Prerequisite No. 11 is out of the scope of this article. However, here is an illustration of how at least one feature was implemented.</p> <p>Let us examine the cloud-init-masters.yml.tpl file, specifically the part between line numbers 53 and 79:</p> <pre><code>- path: /var/lib/rancher/rke2/server/manifests/rke2-openstack-cloud-controller-manager.yaml\n  permissions: \"0600\"\n  owner: root:root\n  content: |\n    apiVersion: helm.cattle.io/v1\n    kind: HelmChart\n    metadata:\n      name: openstack-cloud-controller-manager\n      namespace: kube-system\n    spec:\n      chart: openstack-cloud-controller-manager\n      repo: https://kubernetes.github.io/cloud-provider-openstack\n      targetNamespace: kube-system\n      bootstrap: True\n      valuesContent: |-\n        nodeSelector:\n          node-role.kubernetes.io/control-plane: \"true\"\n        cloudConfig:\n          global:\n            auth-url: https://keystone.3Engines.com:5000\n            application-credential-id: \"${application_credential_id}\"\n            application-credential-secret: \"${application_credential_secret}\"\n            region: ${region}\n            tenant-id: ${project_id}\n          loadBalancer:\n            floating-network-id: \"${floating_network_id}\"\n            subnet-id: ${subnet_id}\n</code></pre> <p>It covers creating a 
YAML definition of a HelmChart CRD</p> <p>rke2-openstack-cloud-controller-manager.yaml</p> <p>in location</p> <p>/var/lib/rancher/rke2/server/manifests/</p> <p>on the master node. Upon cluster creation, the RKE2 provisioner automatically captures this file and deploys a pod responsible for provisioning load balancers. This can be verified by checking the pods in the kube-system namespace:</p> <pre><code>kubectl get pods -n kube-system\n</code></pre> <p>One of the entries is the aforementioned pod:</p> <pre><code>NAME READY STATUS RESTARTS AGE\n...\nopenstack-cloud-controller-manager-bz7zt 1/1 Running 1 (4h ago) 26h\n...\n</code></pre>"},{"location":"kubernetes/How-to-install-Rancher-RKE2-Kubernetes-on-3Engines-Cloud-cloud.html.html#further-customization","title":"Further customization\ud83d\udd17","text":"<p>Depending on your use case, further customization of the provided sample repository will be required to tune the Terraform configurations to provision an RKE2 cluster. We suggest evaluating the following enhancements:</p> <ul> <li>Incorporate High Availability of the Control Plane</li> <li>Integrate with CSI Cinder to enable automated provisioning of block storage with Persistent Volume Claims (PVCs)</li> <li>Integrate the NVIDIA device plugin to enable native integration of VMs with vGPUs.</li> <li>Implement a node autoscaler to complement the Kubernetes-native Horizontal Pod Autoscaler (HPA)</li> <li>Implement affinity and anti-affinity rules for the placement of worker and master nodes</li> </ul> <p>To implement these features, you would need to simultaneously adjust definitions for both Terraform and Kubernetes resources. 
Covering those steps is, therefore, outside of scope of this article.</p>"},{"location":"kubernetes/How-to-install-Rancher-RKE2-Kubernetes-on-3Engines-Cloud-cloud.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>In this article, you have created a proper Kubernetes solution using RKE2 cluster as a foundation.</p> <p>You can also consider creating Kubernetes clusters using Magnum within OpenStack:</p> <p>How to Create a Kubernetes Cluster Using 3Engines Cloud OpenStack Magnum</p>"},{"location":"kubernetes/Implementing-IP-Whitelisting-for-Load-Balancers-with-Security-Groups-on-3Engines-Cloud.html.html","title":"Implementing IP Whitelisting for Load Balancers with Security Groups on 3Engines Cloud\ud83d\udd17","text":"<p>In this article we describe how to use commands in Horizon, CLI and Terraform to secure load balancers for Kubernetes clusters in OpenStack by implementing IP whitelisting.</p>"},{"location":"kubernetes/Implementing-IP-Whitelisting-for-Load-Balancers-with-Security-Groups-on-3Engines-Cloud.html.html#what-are-we-going-to-do","title":"What Are We Going To Do\ud83d\udd17","text":""},{"location":"kubernetes/Implementing-IP-Whitelisting-for-Load-Balancers-with-Security-Groups-on-3Engines-Cloud.html.html#introduction","title":"Introduction\ud83d\udd17","text":"<p>Load balancers without proper restrictions are vulnerable to unauthorized access. By implementing IP whitelisting, only specified IP addresses are permitted to access the load balancer. You decide from which IP address it is possible to access the load balancers in particular and the Kubernetes cluster in general.</p>"},{"location":"kubernetes/Implementing-IP-Whitelisting-for-Load-Balancers-with-Security-Groups-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p> <p>No. 
2 List of IP addresses/ranges to whitelist</p> <p>This is the list of IP addresses and ranges from which you want to allow access to the load balancer.</p> <p>No. 3 A preconfigured load balancer</p> <p>In OpenStack, each time you create a Kubernetes cluster, the corresponding load balancers are created automatically.</p> <p>See article How to Create a Kubernetes Cluster Using 3Engines Cloud OpenStack Magnum</p> <p>No. 4 OpenStack command operational</p> <p>This is necessary for the CLI procedures.</p> <p>This boils down to sourcing the proper RC file from Horizon. See How To Use Command Line Interface for Kubernetes Clusters On 3Engines Cloud OpenStack Magnum</p> <p>No. 5 Python Octavia Client</p> <p>To operate load balancers with the CLI, the Python Octavia Client (python-octaviaclient) is required. It is a command-line client for the OpenStack Load Balancing service. Install the load-balancer (Octavia) plugin with the following command from the Terminal window, on Ubuntu 22.04:</p> <pre><code>pip install python-octaviaclient\n</code></pre> <p>Or, if you have virtualenvwrapper installed:</p> <pre><code>mkvirtualenv python-octaviaclient\npip install python-octaviaclient\n</code></pre> <p>Depending on the environment, you might need to use variants such as python3, pip3 and so on.</p> <p>No. 6 Terraform installed</p> <p>You will need Terraform version 1.5.0 or higher to be operational.</p> <p>For a complete introduction to installing Terraform on OpenStack, see the article Generating and authorizing Terraform using Keycloak user on 3Engines Cloud</p> <p>To use Terraform in this capacity, you will need to authenticate to the cloud using application credentials with unrestricted access.
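</p> <p>Before applying any of the procedures below, it can help to sanity-check the address list from Prerequisite No. 2. A small sketch using Python's standard ipaddress module (the addresses are hypothetical examples):</p>

```python
# Validate whitelist entries and test whether a client address falls
# inside any whitelisted range. Standard library only; the entries
# below are hypothetical examples.
import ipaddress

whitelist = ["192.168.1.0/24", "203.0.113.7"]  # hypothetical entries

# strict=False also accepts plain host addresses (treated as /32)
networks = [ipaddress.ip_network(entry, strict=False) for entry in whitelist]

def is_whitelisted(client_ip: str) -> bool:
    ip = ipaddress.ip_address(client_ip)
    return any(ip in net for net in networks)

print(is_whitelisted("192.168.1.42"))   # inside the /24 range
print(is_whitelisted("198.51.100.9"))   # not in any range
```

<p>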
Check article How to generate or use Application Credentials via CLI on 3Engines Cloud</p>"},{"location":"kubernetes/Implementing-IP-Whitelisting-for-Load-Balancers-with-Security-Groups-on-3Engines-Cloud.html.html#horizon-whitelisting-load-balancers","title":"Horizon: Whitelisting Load Balancers\ud83d\udd17","text":"<p>We will whitelist load balancers by restricting the relevant ports in their security groups. In Horizon, use the command Network \u2013&gt; Load Balancers to see the list of load balancers:</p> <p></p> <p>Let us use the load balancer with the name starting with gitlab. There is no direct link from a load balancer to security groups, so we first have to identify an instance which corresponds to that load balancer. Use commands Project \u2013&gt; Compute \u2013&gt; Instances and search for instances containing gitlab in their names:</p> <p></p> <p>Edit the security groups of those instances \u2013 for each instance, go to the Actions menu and select Edit Security Groups.</p> <p></p> <p>Filter by gitlab:</p> <p></p> <p>Use commands Project \u2013&gt; Network \u2013&gt; Security Groups to list security groups with gitlab in their names:</p> <p></p> <p>Choose which one you are going to edit; alternatively, you can create a new security group.
In any case, be sure to enter the following data:</p> <ul> <li>Direction: Ingress</li> <li>Ether Type: IPv4</li> <li>Protocol: TCP</li> <li>Port Range: Specify the port range used by your load balancer.</li> <li>Remote IP Prefix: Enter the IP address or CIDR to whitelist.</li> </ul> <p>Save and apply the changes.</p>"},{"location":"kubernetes/Implementing-IP-Whitelisting-for-Load-Balancers-with-Security-Groups-on-3Engines-Cloud.html.html#verification","title":"Verification\ud83d\udd17","text":"<p>To confirm the configuration:</p> <ol> <li>Go to the Instances section in Horizon.</li> <li>View the security groups applied to the load balancers\u2019 associated instances.</li> <li>Ensure the newly added rule is visible.</li> </ol>"},{"location":"kubernetes/Implementing-IP-Whitelisting-for-Load-Balancers-with-Security-Groups-on-3Engines-Cloud.html.html#cli-whitelisting-load-balancers","title":"CLI: Whitelisting Load Balancers\ud83d\udd17","text":"<p>The OpenStack CLI provides a command-line method for implementing IP whitelisting.</p> <p>Be sure to work through Prerequisites Nos. 4 and 5 so that the openstack command is fully operational.</p> <p>Display the details of the load balancer:</p> <pre><code>openstack loadbalancer show &lt;LOAD_BALANCER_NAME_OR_ID&gt;\n</code></pre> <p>Identify the pool associated with the load balancer:</p> <pre><code>openstack loadbalancer pool list\n</code></pre> <p>Show details of the pool to list its members:</p> <pre><code>openstack loadbalancer pool show &lt;POOL_NAME_OR_ID&gt;\n</code></pre> <p>Note the IP addresses of the pool members and identify the instances hosting them.</p> <p>Create a security group for IP whitelisting:</p> <pre><code>openstack security group create &lt;SECURITY_GROUP_NAME&gt;\n</code></pre> <p>Add rules to the security group:</p> <pre><code>openstack security group rule create \\\n--ingress \\\n--ethertype IPv4 \\\n--protocol tcp \\\n--dst-port &lt;PORT_RANGE&gt; \\\n--remote-ip
&lt;IP_OR_CIDR&gt; \\\n&lt;SECURITY_GROUP_ID&gt;\n</code></pre> <p>Apply the security group to the instances hosting the pool members:</p> <pre><code>openstack server add security group &lt;INSTANCE_ID&gt; &lt;SECURITY_GROUP_NAME&gt;\n</code></pre>"},{"location":"kubernetes/Implementing-IP-Whitelisting-for-Load-Balancers-with-Security-Groups-on-3Engines-Cloud.html.html#verification_1","title":"Verification\ud83d\udd17","text":"<p>Verify the applied security group rules:</p> <pre><code>openstack security group show &lt;SECURITY_GROUP_ID&gt;\n</code></pre> <p>Confirm the security group is attached to the appropriate instances:</p> <pre><code>openstack server show &lt;INSTANCE_ID&gt;\n</code></pre>"},{"location":"kubernetes/Implementing-IP-Whitelisting-for-Load-Balancers-with-Security-Groups-on-3Engines-Cloud.html.html#terraform-whitelisting-load-balancers","title":"Terraform: Whitelisting Load Balancers\ud83d\udd17","text":"<p>Terraform is an Infrastructure as Code (IaC) tool that can automate the process of configuring IP whitelisting.</p> <p>Create a security group and whitelist rule in main.tf:</p> <pre><code># main.tf\n\n# Security Group to Whitelist IPs\nresource \"openstack_networking_secgroup_v2\" \"whitelist_secgroup\" {\n name = \"loadbalancer_whitelist\"\n description = \"Security group for load balancer IP whitelisting\"\n}\n\n# Add Whitelist Rule for Specific IPs\nresource \"openstack_networking_secgroup_rule_v2\" \"allow_whitelist\" {\n direction = \"ingress\"\n ethertype = \"IPv4\"\n protocol = \"tcp\"\n port_range_min = 80 # Replace with actual port range\n port_range_max = 80\n remote_ip_prefix = \"192.168.1.0/24\" # Replace with actual CIDR\n security_group_id = openstack_networking_secgroup_v2.whitelist_secgroup.id\n}\n\n# Existing Instances Associated with Pool Members\nresource \"openstack_compute_instance_v2\" \"instances\" {\n count = 2 # Adjust to the number of pool member instances\n name = \"pool_member_${count.index + 1}\"\n flavor_id = 
\"m1.small\" # Replace with an appropriate flavor\n image_id = \"image-id\" # Replace with a valid image ID\n key_pair = \"your-key-pair\"\n security_groups = [openstack_networking_secgroup_v2.whitelist_secgroup.name]\n network {\n uuid = \"network-uuid\" # Replace with the UUID of your network\n }\n}\n\n# Associate the Load Balancer with Security Group via Instances\nresource \"openstack_lb_loadbalancer_v2\" \"loadbalancer\" {\n name = \"my_loadbalancer\"\n vip_subnet_id = \"subnet-id\" # Replace with the subnet ID\n depends_on = [openstack_compute_instance_v2.instances]\n}\n</code></pre> <p>Initialize and apply the configuration:</p> <pre><code>terraform init\nterraform apply\n</code></pre> <p>Verification</p> <p>Use Terraform to review the applied state:</p> <pre><code>terraform show\nopenstack server show &lt;INSTANCE_ID&gt;\nopenstack security group show &lt;SECURITY_GROUP_ID&gt;\n</code></pre>"},{"location":"kubernetes/Implementing-IP-Whitelisting-for-Load-Balancers-with-Security-Groups-on-3Engines-Cloud.html.html#state-of-security-before-and-after-whitelisting-the-balancers","title":"State of Security: Before and after whitelisting the balancers\ud83d\udd17","text":"<p>Before implementing IP whitelisting, the load balancer accepts traffic from all sources. After completing the procedure:</p> <ul> <li>Only specified IPs can access the load balancer.</li> <li>Unauthorized access attempts are denied.</li> </ul>"},{"location":"kubernetes/Implementing-IP-Whitelisting-for-Load-Balancers-with-Security-Groups-on-3Engines-Cloud.html.html#verification-tools","title":"Verification Tools\ud83d\udd17","text":"<p>Various tools can ensure the protection is installed and active:</p> livez Kubernetes monitoring endpoint. nmap (free): For port scanning and access verification. curl (free): To confirm access control from specific IPs. 
Wireshark (free): For packet-level analysis."},{"location":"kubernetes/Implementing-IP-Whitelisting-for-Load-Balancers-with-Security-Groups-on-3Engines-Cloud.html.html#testing-with-nmap","title":"Testing with nmap\ud83d\udd17","text":"<pre><code>nmap -p &lt;PORT&gt; &lt;LOAD_BALANCER_IP&gt;\n</code></pre>"},{"location":"kubernetes/Implementing-IP-Whitelisting-for-Load-Balancers-with-Security-Groups-on-3Engines-Cloud.html.html#testing-with-http-and-curl","title":"Testing with http and curl\ud83d\udd17","text":"<pre><code>curl http://&lt;LOAD_BALANCER_IP&gt;\n</code></pre>"},{"location":"kubernetes/Implementing-IP-Whitelisting-for-Load-Balancers-with-Security-Groups-on-3Engines-Cloud.html.html#testing-with-curl-and-livez","title":"Testing with curl and livez\ud83d\udd17","text":"<p>This would be a typical response before changes:</p> <pre><code>curl -k https://&lt;KUBE_API_IP&gt;:6443/livez?verbose\n[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller 
ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[+]poststarthook/apiservice-discovery-controller ok\nlivez check passed\n</code></pre> <p>And this would be a typical response after the changes:</p> <pre><code>curl -k https://&lt;KUBE_API_IP&gt;:6443/livez?verbose -m 5\ncurl: (28) Connection timed out after 5000 milliseconds\n</code></pre>"},{"location":"kubernetes/Implementing-IP-Whitelisting-for-Load-Balancers-with-Security-Groups-on-3Engines-Cloud.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>Compare with articles:</p> <p>Configuring IP Whitelisting for OpenStack Load Balancer using Horizon and CLI on 3Engines Cloud</p> <p>Configuring IP Whitelisting for OpenStack Load Balancer using Terraform on 3Engines Cloud</p>"},{"location":"kubernetes/Install-GitLab-on-3Engines-Cloud-Kubernetes.html.html","title":"Install GitLab on 3Engines Cloud Kubernetes\ud83d\udd17","text":"<p>Source control is essential for building professional software. Git has become synonymous with modern source control, and GitLab is one of the most popular tools based on Git.</p> <p>GitLab can be deployed as your local instance to ensure privacy of the stored artifacts.
It is also the tool of choice for its rich automation capabilities.</p> <p>In this article, we will install GitLab on a Kubernetes cluster in the 3Engines Cloud cloud.</p>"},{"location":"kubernetes/Install-GitLab-on-3Engines-Cloud-Kubernetes.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Create a Floating IP and associate the A record in DNS</li> <li>Apply preliminary configuration</li> <li>Install GitLab Helm chart</li> <li>Verify the installation</li> </ul>"},{"location":"kubernetes/Install-GitLab-on-3Engines-Cloud-Kubernetes.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p> <p>No. 2 Understand Helm deployments</p> <p>To install GitLab on a Kubernetes cluster, we will use the appropriate Helm chart. The following article explains the procedure:</p> <p>Deploying Helm Charts on Magnum Kubernetes Clusters on 3Engines Cloud Cloud</p> <p>No. 3 Kubernetes cluster without ingress controller already installed</p> <p>The Helm chart for the installation of the GitLab client will install its own ingress controller, so for the sake of following this article, you should</p> <ul> <li>either use a cluster that does not have such an ingress controller already installed, or</li> <li>create a new cluster without activating the Ingress Controller option in the Network window. That option should remain like this:</li> </ul> <p></p> <p>A general explanation of how to create a Kubernetes cluster is here:</p> <p>How to Create a Kubernetes Cluster Using 3Engines Cloud OpenStack Magnum</p> <p>Be sure to use a cluster template for at least version 1.25, like this:</p> <p></p> <p>No. 4 Have your own domain and be able to manage it</p> <p>You will be able to manage the records of a domain associated with your gitlab instance at your domain registrar.
Alternatively, OpenStack on 3Engines Cloud hosting lets you manage DNS as a service:</p> <p>DNS as a Service on 3Engines Cloud Hosting</p> <p>No. 5 Proof of concept vs. production-ready version of GitLab client</p> <p>In Step 3 below, you will create the file my-values-gitlab.yaml to define the default configuration of the GitLab client. The values chosen there will provide a solid quick start, perhaps in the \u201cproof of concept\u201d phase of development. To customize for production, this reference will come in handy: https://gitlab.com/gitlab-org/charts/gitlab/-/blob/v7.11.1/values.yaml?ref_type=tags</p>"},{"location":"kubernetes/Install-GitLab-on-3Engines-Cloud-Kubernetes.html.html#step-1-create-a-floating-ip-and-associate-the-a-record-in-dns","title":"Step 1 Create a Floating IP and associate the A record in DNS\ud83d\udd17","text":"<p>Our GitLab client will run a web application (GUI) exposed as a Kubernetes service. We will use GitLab\u2019s Helm chart, which will, as part of GitLab\u2019s installation,</p> <ul> <li>deploy an ingress (controller and resource) to establish service routing and</li> <li>enable its HTTPS encryption (using CertManager).</li> </ul> <p>We will first create a Floating IP (FIP) using the Horizon GUI. This FIP will later be associated with the ingress controller. To proceed, go to the Network tab, then Floating IPs, and click the Allocate IP to project button. Fill in a brief description and click Allocate IP.</p> <p></p> <p>After closing the form, your new floating IP will appear on the list; let us say that, for the sake of this article, its value is 64.225.134.173. The next step is to create an A record that will associate the subdomain gitlab. with this IP address.
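</p> <p>Once the A record has propagated, you can confirm that the subdomain resolves to the floating IP. A small sketch with an injectable resolver, using the domain and IP from this example:</p>

```python
# Check that a hostname's A record resolves to the expected IP.
# The resolver is injectable so the logic can be exercised without
# live DNS; by default the system resolver is used.
import socket

def a_record_matches(hostname, expected_ip, resolve=socket.gethostbyname):
    try:
        return resolve(hostname) == expected_ip
    except OSError:
        return False  # not resolvable (yet)

# Example call, once your registrar has propagated the record:
# a_record_matches("gitlab.mysampledomain.info", "64.225.134.173")
```

<p>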
In our case, it might look like this if you are using DNS as a Service under OpenStack Horizon UI on your 3Engines Cloud cloud: <p></p>"},{"location":"kubernetes/Install-GitLab-on-3Engines-Cloud-Kubernetes.html.html#step-2-apply-preliminary-configuration","title":"Step 2 Apply preliminary configuration\ud83d\udd17","text":"<p>A condition to ensure compatibility with Kubernetes setup on 3Engines Cloud clouds is to enable the Service Accounts provisioned by GitLab Helm chart to have sufficient access to reading scaling metrics. This can be done by creating an appropriate rolebinding.</p> <p>First, create a namespace gitlab where we will deploy the Helm chart:</p> <pre><code>kubectl create ns gitlab\n</code></pre> <p>Then, create a file gitlab-rolebinding.yaml with the following contents:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n name: gitlab-rolebinding\n namespace: gitlab\nsubjects:\n- apiGroup: rbac.authorization.k8s.io\n kind: Group\n name: system:serviceaccounts\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: system:metrics-server-aggregated-reader\n</code></pre> <p>This adds the rolebinding of the namespace with the appropriate metrics reading cluster role. Apply with:</p> <pre><code>kubectl apply -f gitlab-rolebinding.yaml\n</code></pre>"},{"location":"kubernetes/Install-GitLab-on-3Engines-Cloud-Kubernetes.html.html#step-3-install-gitlab-helm-chart","title":"Step 3 Install GitLab Helm chart\ud83d\udd17","text":"<p>Now let\u2019s download GitLab\u2019s Helm repository with the following two commands:</p> <pre><code>helm repo add gitlab https://charts.gitlab.io/\nhelm repo update\n</code></pre> <p>Next, let\u2019s prepare a configuration file my-values-gitlab.yaml to contain our specific configuration settings. 
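</p> <p>Conceptually, Helm deep-merges the file passed via --values over the chart's default values, with the override winning on conflicts. A rough Python sketch of that behavior (an illustration, not Helm's actual merge code):</p>

```python
# Simplified illustration of how a Helm values file overrides chart
# defaults: nested mappings are merged key by key, overrides win.
def deep_merge(defaults: dict, overrides: dict) -> dict:
    result = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = deep_merge(result[key], value)
        else:
            result[key] = value
    return result

# Hypothetical defaults; the chart's real defaults live in values.yaml.
defaults = {"global": {"edition": "ee", "hosts": {"domain": "example.com"}}}
overrides = {"global": {"edition": "ce"}}

merged = deep_merge(defaults, overrides)
print(merged["global"]["edition"])          # overridden
print(merged["global"]["hosts"]["domain"])  # default preserved
```

<p>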
They will override the default values.yaml configuration.</p> <p>my-values-gitlab.yaml</p> <pre><code>global:\n edition: ce\n hosts:\n domain: mysampledomain.info\n externalIP: 64.225.134.173\ncertmanager-issuer:\n email: XYZ@XXYYZZ.com\n</code></pre> <p>Here is a brief explanation of the concrete settings in this piece of code:</p> <ul> <li>global.edition: ce \u2013 we are using the free, community edition of GitLab.</li> <li>global.hosts.domain: use your own domain instead of mysampledomain.info.</li> <li>global.hosts.externalIP: instead of 64.225.134.173, place the floating IP of the ingress controller that was created in Step 1.</li> <li>certmanager-issuer.email: instead of XYZ@XXYYZZ.com, provide your real email address. It will be stated on our GitLab client\u2019s HTTPS certificates.</li> </ul> <p>Once all the above conditions are met, we can install the chart to the gitlab namespace with the following command:</p> <pre><code>helm install gitlab gitlab/gitlab --values my-values-gitlab.yaml --namespace gitlab --version 7.11.1\n</code></pre> <p>Here is what the output of a successful installation may look like:</p> <p></p> <p>After this step, there will be several Kubernetes resources created.</p> <p></p>"},{"location":"kubernetes/Install-GitLab-on-3Engines-Cloud-Kubernetes.html.html#step-4-verify-the-installation","title":"Step 4 Verify the installation\ud83d\udd17","text":"<p>After a short while, when all the pods are up, we can access GitLab\u2019s service by entering the address gitlab.&lt;your domain&gt; (in our example, gitlab.mysampledomain.info):</p> <p></p> <p>In order to log in to GitLab with your initial user, use root as the username and extract the password with the following command:</p> <pre><code>kubectl get secret gitlab-gitlab-initial-root-password -n gitlab -ojsonpath='{.data.password}' | base64 --decode ; echo\n</code></pre> <p>This takes us to the following screen.
From there we can utilize various features of GitLab:</p> <p></p>"},{"location":"kubernetes/Install-GitLab-on-3Engines-Cloud-Kubernetes.html.html#errors-during-the-installation","title":"Errors during the installation\ud83d\udd17","text":"<p>If you encounter errors during installation from which you cannot recover, it might be worth starting with a fresh installation. Here is the command to delete the chart:</p> <pre><code>helm uninstall gitlab -n gitlab\n</code></pre> <p>After that, you can restart the procedure from Step 2.</p>"},{"location":"kubernetes/Install-GitLab-on-3Engines-Cloud-Kubernetes.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>You now have a local instance of GitLab at your disposal. As next steps you could:</p> <ul> <li>Make the installation more robust and secure, e.g. by setting up GitLab\u2019s storage outside of the cluster</li> <li>Configure custom runners</li> <li>Set up additional users, or federate authentication to an external identity provider</li> </ul> <p>These steps are outside the scope of this article; refer to GitLab\u2019s documentation for further guidelines.</p>"},{"location":"kubernetes/Install-and-run-Argo-Workflows-on-3Engines-Cloud-Magnum-Kubernetes.html.html","title":"Install and run Argo Workflows on 3Engines Cloud Magnum Kubernetes\ud83d\udd17","text":"<p>Argo Workflows enable running complex job workflows on Kubernetes.
It can</p> <ul> <li>provide custom logic for managing dependencies between jobs,</li> <li>manage situations where certain steps of the workflow fail,</li> <li>run jobs in parallel to crunch numbers for data processing or machine learning tasks,</li> <li>run CI/CD pipelines,</li> <li>create workflows with directed acyclic graphs (DAG) etc.</li> </ul> <p>Argo applies a microservice-oriented, container-native approach, where each step of a workflow runs as a container.</p>"},{"location":"kubernetes/Install-and-run-Argo-Workflows-on-3Engines-Cloud-Magnum-Kubernetes.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Authenticate to the cluster</li> <li>Apply preliminary configuration to PodSecurityPolicy</li> <li>Install Argo Workflows to the cluster</li> <li>Run Argo Workflows from the cloud</li> <li>Run Argo Workflows locally</li> <li>Run sample workflow with two tasks</li> </ul>"},{"location":"kubernetes/Install-and-run-Argo-Workflows-on-3Engines-Cloud-Magnum-Kubernetes.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"No. 1 Account You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com. No. 2 kubectl pointed to the Kubernetes cluster If you are creating a new cluster, for the purposes of this article, call it argo-cluster. See How To Access Kubernetes Cluster Post Deployment Using Kubectl On 3Engines Cloud OpenStack Magnum"},{"location":"kubernetes/Install-and-run-Argo-Workflows-on-3Engines-Cloud-Magnum-Kubernetes.html.html#authenticate-to-the-cluster","title":"Authenticate to the cluster\ud83d\udd17","text":"<p>Let us authenticate to argo-cluster. Run from your local machine the following command to create a config file in the present working directory:</p> <pre><code>openstack coe cluster config argo-cluster\n</code></pre> <p>This will output the command to set the KUBECONFIG env. 
variable pointing to the location of your cluster e.g.</p> <pre><code>export KUBECONFIG=/home/eouser/config\n</code></pre> <p>Run this command.</p>"},{"location":"kubernetes/Install-and-run-Argo-Workflows-on-3Engines-Cloud-Magnum-Kubernetes.html.html#apply-preliminary-configuration","title":"Apply preliminary configuration\ud83d\udd17","text":"<p>OpenStack Magnum by default applies certain security restrictions for pods running on the cluster, in line with \u201cleast privileges\u201d practice. Argo Workflows will require some additional privileges in order to run correctly.</p> <p>First create a dedicated namespace for Argo Workflows artifacts:</p> <pre><code>kubectl create namespace argo\n</code></pre> <p>The next step is to create a RoleBinding that will add a magnum:podsecuritypolicy:privileged ClusterRole. Create a file argo-rolebinding.yaml with the following contents:</p> <p>argo-rolebinding.yaml</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n name: argo-rolebinding\n namespace: argo\nsubjects:\n- apiGroup: rbac.authorization.k8s.io\n kind: Group\n name: system:serviceaccounts\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: magnum:podsecuritypolicy:privileged\n</code></pre> <p>and apply with:</p> <pre><code>kubectl apply -f argo-rolebinding.yaml\n</code></pre>"},{"location":"kubernetes/Install-and-run-Argo-Workflows-on-3Engines-Cloud-Magnum-Kubernetes.html.html#install-argo-workflows","title":"Install Argo Workflows\ud83d\udd17","text":"<p>In order to deploy Argo on the cluster, run the following command:</p> <pre><code>kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/download/v3.4.4/install.yaml\n</code></pre> <p>There is also an Argo CLI available for running jobs from command line. 
Installing it is outside of scope of this article.</p>"},{"location":"kubernetes/Install-and-run-Argo-Workflows-on-3Engines-Cloud-Magnum-Kubernetes.html.html#run-argo-workflows-from-the-cloud","title":"Run Argo Workflows from the cloud\ud83d\udd17","text":"<p>Normally, you would need to authenticate to the server via a UI login. Here, we are going to switch authentication mode by applying the following patch to the deployment. (For production, you might need to incorporate a proper authentication mechanism.) Submit the following command:</p> <pre><code>kubectl patch deployment \\\n argo-server \\\n --namespace argo \\\n --type='json' \\\n -p='[{\"op\": \"replace\", \"path\": \"/spec/template/spec/containers/0/args\", \"value\": [\n \"server\",\n \"--auth-mode=server\"\n]}]'\n</code></pre> <p>Argo service by default gets exposed as a Kubernetes service of ClusterIp type, which can be verified by typing the following command:</p> <pre><code>kubectl get services -n argo\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nargo-server ClusterIP 10.254.132.118 &lt;none&gt; 2746:31294/TCP 1d\n</code></pre> <p>In order to expose this service to the Internet, convert type ClusterIP to LoadBalancer by patching the service with the following command:</p> <pre><code>kubectl -n argo patch service argo-server -p '{\"spec\": {\"type\": \"LoadBalancer\"}}'\n</code></pre> <p>After a couple of minutes a cloud LoadBalancer will be generated and the External IP gets populated:</p> <pre><code>kubectl get services -n argo\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nargo-server LoadBalancer 10.254.132.118 64.225.134.153 2746:31294/TCP 1d\n</code></pre> <p>The IP in our case is 64.225.134.153.</p> <p>Argo is by default served on HTTPS with a self-signed certificate, on port 2746. 
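</p> <p>Before opening the UI, you can check that the port accepts connections at all; a minimal TCP reachability probe in the same spirit as testing with curl:</p>

```python
# Minimal TCP reachability check for a host:port endpoint. It only
# confirms that the port accepts a TCP connection; it does not speak
# HTTPS or authenticate.
import socket

def is_port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example with the external IP from this article:
# is_port_open("64.225.134.153", 2746)
```

<p>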
So, by typing https://&lt;EXTERNAL_IP&gt;:2746 (in our case, https://64.225.134.153:2746) you should be able to access the service:</p> <p></p>"},{"location":"kubernetes/Install-and-run-Argo-Workflows-on-3Engines-Cloud-Magnum-Kubernetes.html.html#run-sample-workflow-with-two-tasks","title":"Run sample workflow with two tasks\ud83d\udd17","text":"<p>In order to run a sample workflow, first close the initial pop-ups in the UI. Then go to the top-left icon \u201cWorkflows\u201d and click on it; you might need to press \u201cContinue\u201d in the following pop-up.</p> <p>The next step is to click the \u201cSubmit New Workflow\u201d button in the top left part of the screen, which displays a screen similar to the one below:</p> <p></p> <p>Although you can run the workflow provided by Argo as a start, we provide here an alternative minimal example. In order to run it, create a file, which we can call argo-article.yaml, and copy it in place of the example YAML manifest:</p> <p>argo-article.yaml</p> <pre><code>apiVersion: argoproj.io/v1alpha1\nkind: Workflow\nmetadata:\n generateName: workflow-\n namespace: argo\nspec:\n entrypoint: my-workflow\n serviceAccountName: argo\n templates:\n - name: my-workflow\n dag:\n tasks:\n - name: downloader\n template: downloader-tmpl\n - name: processor\n template: processor-tmpl\n dependencies: [downloader]\n - name: downloader-tmpl\n script:\n image: python:alpine3.6\n command: [python]\n source: |\n print(\"Files downloaded\")\n - name: processor-tmpl\n script:\n image: python:alpine3.6\n command: [python]\n source: |\n print(\"Files processed\")\n</code></pre> <p>This sample mocks a workflow with 2 tasks/jobs. First the downloader task runs; once it finishes, the processor task does its part. Some highlights about this workflow definition:</p> <ul> <li>Both tasks run as containers. So for each task, the python:alpine3.6 container is first pulled from the DockerHub registry. Then this container does the simple job of printing a text.
In a production workflow, rather than using a script, the code with your logic would be pulled from your container registry as a custom Docker image.</li> <li>The order of execution is defined here using a DAG (Directed Acyclic Graph). This allows for specifying the task dependencies in the dependencies section. In our case the dependency is placed on the Processor, so it will only start after the Downloader finishes. If we skipped the dependencies on the Processor, it would run in parallel with the Downloader.</li> <li>Each task in this sequence runs as a Kubernetes pod. When a task is done, the pod completes, which frees the resources on the cluster.</li> </ul> <p>You can run this sample by clicking the \u201c+Create\u201d button. Once the workflow completes, you should see an outcome like the one below:</p> <p></p> <p>Also, when clicking on each step, more information is displayed on the right side of the screen. For example, when clicking on the Processor step, we can see its logs in the bottom right part of the screen.</p> <p>The results show that indeed the message \u201cFiles processed\u201d was printed in the container:</p> <p></p>"},{"location":"kubernetes/Install-and-run-Argo-Workflows-on-3Engines-Cloud-Magnum-Kubernetes.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>For production, consider an alternative authentication mechanism and replacing the self-signed HTTPS certificates with ones generated by a Certificate Authority.</p>"},{"location":"kubernetes/Install-and-run-Dask-on-a-Kubernetes-cluster-in-3Engines-Cloud-cloud.html.html","title":"Install and run Dask on a Kubernetes cluster in 3Engines Cloud cloud\ud83d\udd17","text":"<p>Dask enables scaling computation tasks either as multiple processes on a single machine, or on Dask clusters that consist of multiple worker machines. Dask provides a scalable alternative to popular Python libraries e.g.
NumPy, Pandas or scikit-learn, while keeping a compact and very similar API.</p> <p>The Dask scheduler, once presented with a computation task, splits it into smaller tasks that can be executed in parallel on the worker nodes/processes.</p> <p>In this article you will install a Dask cluster on Kubernetes and run Dask worker nodes as Kubernetes pods. As part of the installation, you will get access to a Jupyter instance, where you can run the sample code.</p>"},{"location":"kubernetes/Install-and-run-Dask-on-a-Kubernetes-cluster-in-3Engines-Cloud-cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Install Dask on Kubernetes</li> <li>Access Jupyter and Dask Scheduler dashboard</li> <li>Run a sample computing task</li> <li>Configure Dask cluster on Kubernetes from Python</li> <li>Resolving errors</li> </ul>"},{"location":"kubernetes/Install-and-run-Dask-on-a-Kubernetes-cluster-in-3Engines-Cloud-cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Hosting</p> <p>You need a 3Engines Cloud hosting account with Horizon interface https://horizon.3Engines.com.</p> <p>No. 2 Kubernetes cluster on 3Engines cloud</p> <p>To create a Kubernetes cluster on the cloud, refer to this guide: How to Create a Kubernetes Cluster Using 3Engines Cloud OpenStack Magnum</p> <p>No. 3 Access to kubectl command line</p> <p>The instructions for activation of kubectl are provided in: How To Access Kubernetes Cluster Post Deployment Using Kubectl On 3Engines Cloud OpenStack Magnum</p> <p>No. 4 Familiarity with Helm</p> <p>For more information on using Helm and installing apps with Helm on Kubernetes, refer to Deploying Helm Charts on Magnum Kubernetes Clusters on 3Engines Cloud Cloud</p> <p>No. 5 Python3 available on your machine</p> <p>Python3 preinstalled on the working machine.</p> <p>No. 
6 Basic familiarity with Jupyter and Python scientific libraries</p> <p>We will use Pandas as an example.</p>"},{"location":"kubernetes/Install-and-run-Dask-on-a-Kubernetes-cluster-in-3Engines-Cloud-cloud.html.html#step-1-install-dask-on-kubernetes","title":"Step 1 Install Dask on Kubernetes\ud83d\udd17","text":"<p>To install Dask as a Helm chart, first add the Dask Helm repository:</p> <pre><code>helm repo add dask https://helm.dask.org/\n</code></pre> <p>Instead of installing the chart out of the box, let us customize the configuration for convenience. To view all possible configuration values and their defaults run:</p> <pre><code>helm show values dask/dask\n</code></pre> <p>Prepare file dask-values.yaml to override some of the defaults:</p> <p>dask-values.yaml</p> <pre><code>scheduler:\n serviceType: LoadBalancer\njupyter:\n serviceType: LoadBalancer\nworker:\n replicas: 4\n</code></pre> <p>This changes the default service type for Jupyter and Scheduler to LoadBalancer, so that they get exposed publicly. Also, the default number of Dask workers is 3 but is now changed to 4. Each Dask worker pod will get allocated 3 GB RAM and 1 CPU; we keep these defaults.</p> <p>To deploy the chart, create the namespace dask and install to it:</p> <pre><code>helm install dask dask/dask -n dask --create-namespace -f dask-values.yaml\n</code></pre>"},{"location":"kubernetes/Install-and-run-Dask-on-a-Kubernetes-cluster-in-3Engines-Cloud-cloud.html.html#step-2-access-jupyter-and-dask-scheduler-dashboard","title":"Step 2 Access Jupyter and Dask Scheduler dashboard\ud83d\udd17","text":"<p>After the installation step, you can access Dask services:</p> <pre><code>kubectl get services -n dask\n</code></pre> <p>There are two services, for Jupyter and the Dask Scheduler dashboard. 
Populating the external IPs will take a few minutes:</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\ndask-jupyter LoadBalancer 10.254.230.230 64.225.128.91 80:32437/TCP 6m49s\ndask-scheduler LoadBalancer 10.254.41.250 64.225.128.236 8786:31707/TCP,80:31668/TCP 6m49s\n</code></pre> <p>We can paste the external IPs into the browser to view the services. To access Jupyter, you will first need to pass the login screen; the default password is dask. Then you can view the Jupyter instance:</p> <p></p> <p>Similarly, with the Scheduler Dashboard, paste the floating IP into the browser to view it. If you then click on the \u201cWorkers\u201d tab above, you can see that 4 workers are running on our Dask cluster:</p> <p></p>"},{"location":"kubernetes/Install-and-run-Dask-on-a-Kubernetes-cluster-in-3Engines-Cloud-cloud.html.html#step-3-run-a-sample-computing-task","title":"Step 3 Run a sample computing task\ud83d\udd17","text":"<p>The installed Jupyter instance already contains Dask and other useful Python libraries. To run a sample job, first activate the notebook by clicking on the icon named NoteBook \u2192 Python3(ipykernel) on the right hand side of the Jupyter instance browser screen.</p> <p>The sample job performs a calculation on a table (dataframe) of 100 million rows and just one column. 
Each record will be filled with a random integer from 1 to 100,000,000 and the task is to calculate the sum of all records.</p> <p>The code will run the same example for Pandas (single process) and Dask (parallelized on our cluster) and we will be able to inspect the results.</p> <p>Copy the following code and paste it to a cell in the Jupyter notebook:</p> <pre><code>import dask.dataframe as dd\nimport pandas as pd\nimport numpy as np\nimport time\n\ndata = {'A': np.random.randint(1, 100_000_000, 100_000_000)}\ndf_pandas = pd.DataFrame(data)\ndf_dask = dd.from_pandas(df_pandas, npartitions=4)\n\n# Pandas\nstart_time_pandas = time.time()\nresult_pandas = df_pandas['A'].sum()\nend_time_pandas = time.time()\nprint(f\"Result Pandas: {result_pandas}\")\nprint(f\"Computation time Pandas: {end_time_pandas - start_time_pandas:.2f} seconds.\")\n\n# Dask\nstart_time_dask = time.time()\nresult_dask = df_dask['A'].sum().compute()\nend_time_dask = time.time()\nprint(f\"Result Dask: {result_dask}\")\nprint(f\"Computation time Dask: {end_time_dask - start_time_dask:.2f} seconds.\")\n</code></pre> <p>Hit play or use the Run option from the main menu to execute the code. After a few seconds, the result will appear below the cell with the code.</p> <p>Some of the results we could observe for this example:</p> <pre><code>Result Pandas: 4999822570722943\nComputation time Pandas: 0.15 seconds.\nResult Dask: 4999822570722943\nComputation time Dask: 0.07 seconds.\n</code></pre> <p>Note that these results are not deterministic and plain Pandas could also perform better case by case. The overhead of distributing work and collecting results from Dask workers also needs to be taken into account. 
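The split-map-combine pattern Dask applies here — partition the column, sum each partition in parallel, then add up the partial sums — can be sketched with the Python standard library alone. This is a toy illustration of the idea, not Dask's actual implementation; the function name `parallel_sum` is our own:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, npartitions=4):
    """Toy stand-in for Dask's partitioned sum: split, map, then reduce."""
    # Split the data into npartitions roughly equal chunks.
    size = (len(data) + npartitions - 1) // npartitions
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Sum each chunk in parallel (Dask would do this on worker pods),
    # then combine the partial results into the final answer.
    with ThreadPoolExecutor(max_workers=npartitions) as pool:
        partials = list(pool.map(sum, chunks))
    return sum(partials)

print(parallel_sum(list(range(1, 1_000_001))))  # 500000500000, same as sum(range(1, 1_000_001))
```

The dispatch and combine steps made explicit here are exactly the overhead mentioned above: for small inputs they can outweigh the gain from parallelism.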
Further tuning the performance of Dask is beyond the scope of this article.</p>"},{"location":"kubernetes/Install-and-run-Dask-on-a-Kubernetes-cluster-in-3Engines-Cloud-cloud.html.html#step-4-configure-dask-cluster-on-kubernetes-from-python","title":"Step 4 Configure Dask cluster on Kubernetes from Python\ud83d\udd17","text":"<p>For managing the Dask cluster on Kubernetes we can use a dedicated Python library, dask-kubernetes. Using this library, we can reconfigure certain parameters of our Dask cluster.</p> <p>One way to run dask-kubernetes would be from the Jupyter instance, but then we would have to provide a reference to the kubeconfig of our cluster. Instead, we install dask-kubernetes in our local environment, with the following command:</p> <pre><code>pip install dask-kubernetes\n</code></pre> <p>Once this is done, we can manage the Dask cluster from Python. As an example, let us scale it up to 5 Dask workers. Use nano to create file scale-cluster.py:</p> <pre><code>nano scale-cluster.py\n</code></pre> <p>then insert the following commands:</p> <p>scale-cluster.py</p> <pre><code>from dask_kubernetes import HelmCluster\n\ncluster = HelmCluster(release_name=\"dask\", namespace=\"dask\")\ncluster.scale(5)\n</code></pre> <p>Apply with:</p> <pre><code>python3 scale-cluster.py\n</code></pre> <p>Using the command</p> <pre><code>kubectl get pods -n dask\n</code></pre> <p>you can see that the number of workers is now 5:</p> <p></p> <p>Or, you can see the current number of worker nodes in the Dask Scheduler dashboard (refresh the screen):</p> <p></p> <p>Note that the functionality of dask-kubernetes can also be achieved using the Kubernetes API directly; the choice depends on your personal preference.</p>"},{"location":"kubernetes/Install-and-run-Dask-on-a-Kubernetes-cluster-in-3Engines-Cloud-cloud.html.html#resolving-errors","title":"Resolving errors\ud83d\udd17","text":"<p>When running command</p> <pre><code>python3 scale-cluster.py\n</code></pre> <p>on WSL 
version 1, error messages such as these may appear:</p> <p></p> <p>The code will work properly, that is, it will increase the number of workers to 5, as required. The error should not appear on WSL version 2 or on native Linux distributions.</p>"},{"location":"kubernetes/Install-and-run-NooBaa-on-Kubernetes-cluster-in-single-and-multicloud-environment-on-3Engines-Cloud.html.html","title":"Install and run NooBaa on Kubernetes cluster in single- and multicloud-environment on 3Engines Cloud\ud83d\udd17","text":"<p>NooBaa enables creating an abstracted S3 backend on Kubernetes. Such a backend can be connected to multiple S3 backing stores, e.g. in a multi-cloud setup, allowing for storage expandability or High Availability, among other benefits.</p> <p>In this article you will learn the basics of using NooBaa:</p> <ul> <li>how to install it on a Kubernetes cluster</li> <li>how to create a NooBaa bucket backed by S3 object storage in the 3Engines Cloud cloud</li> <li>how to create a NooBaa bucket mirroring data on two different clouds</li> </ul>"},{"location":"kubernetes/Install-and-run-NooBaa-on-Kubernetes-cluster-in-single-and-multicloud-environment-on-3Engines-Cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Install NooBaa in local environment</li> <li>Apply preliminary configuration</li> <li>Install NooBaa on the Kubernetes cluster</li> <li>Create a NooBaa backing store</li> <li>Create a Bucket Class</li> <li>Create an ObjectBucketClaim</li> <li>Connect to NooBaa bucket from S3cmd</li> <li>Testing access to the bucket</li> <li>Create mirroring on clouds WAW3-1 and WAW3-2</li> </ul>"},{"location":"kubernetes/Install-and-run-NooBaa-on-Kubernetes-cluster-in-single-and-multicloud-environment-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Hosting</p> <p>You need a 3Engines Cloud hosting account with Horizon interface https://horizon.3Engines.com.</p> <p>No. 
2 Access to Kubernetes cluster on WAW3-1 cloud</p> <p>A cluster on WAW3-1 cloud, where we will run our NooBaa installation - follow the guidelines in this article How to Create a Kubernetes Cluster Using 3Engines Cloud OpenStack Magnum.</p> <p>No. 3 Familiarity with using Object Storage on 3Engines clouds</p> <p>More information in How to use Object Storage on 3Engines Cloud</p> <p>The traditional OpenStack term for groupings of stored files is Containers, found under the main menu option Object Store. We will use the term \u201cbucket\u201d for object storage containers, to differentiate them from the container term in the Docker/Kubernetes sense.</p> <p>No. 4 kubectl operational</p> <p>kubectl CLI tool installed and pointing to your cluster via the KUBECONFIG env. variable - more information in How To Access Kubernetes Cluster Post Deployment Using Kubectl On 3Engines Cloud OpenStack Magnum.</p> <p>No. 5 Access to private S3 keys in WAW3-1 cloud</p> <p>You may use the OpenStack CLI to generate and read the private S3 keys - How to generate and manage EC2 credentials on 3Engines Cloud.</p> <p>No. 6 Familiarity with s3cmd for accessing object storage</p> <p>For more info on s3cmd, see How to access private object storage using S3cmd or boto3 on 3Engines Cloud.</p> <p>No. 7 Access to WAW3-2 cloud</p> <p>To mirror data on WAW3-1 and WAW3-2, you will need access to those two clouds.</p>"},{"location":"kubernetes/Install-and-run-NooBaa-on-Kubernetes-cluster-in-single-and-multicloud-environment-on-3Engines-Cloud.html.html#install-noobaa-in-local-environment","title":"Install NooBaa in local environment\ud83d\udd17","text":"<p>The first step to work with NooBaa is to install it on our local system. 
We will download the installer, make it executable and move it to the system path:</p> <pre><code>curl -LO https://github.com/noobaa/noobaa-operator/releases/download/v5.11.0/noobaa-linux-v5.11.0\nchmod +x noobaa-linux-v5.11.0\nsudo mv noobaa-linux-v5.11.0 /usr/local/bin/noobaa\n</code></pre> <p>Enter the password for the root user, if required.</p> <p>After this sequence of steps, it should be possible to run a test command</p> <pre><code>noobaa help\n</code></pre> <p>This will result in an output similar to the below:</p> <p></p>"},{"location":"kubernetes/Install-and-run-NooBaa-on-Kubernetes-cluster-in-single-and-multicloud-environment-on-3Engines-Cloud.html.html#apply-preliminary-configuration","title":"Apply preliminary configuration\ud83d\udd17","text":"<p>We will need to apply additional configuration on a Magnum cluster to avoid a PodSecurityPolicy exception. For a refresher, see the article Installing JupyterHub on Magnum Kubernetes Cluster in 3Engines Cloud Cloud.</p> <p>Let\u2019s start by creating a dedicated namespace for NooBaa artifacts:</p> <pre><code>kubectl create namespace noobaa\n</code></pre> <p>Then create a file noobaa-rolebinding.yaml with the following contents:</p> <p>noobaa-rolebinding.yaml</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n name: noobaa-rolebinding\n namespace: noobaa\nsubjects:\n- apiGroup: rbac.authorization.k8s.io\n kind: Group\n name: system:serviceaccounts\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: magnum:podsecuritypolicy:privileged\n</code></pre> <p>and apply with:</p> <pre><code>kubectl apply -f noobaa-rolebinding.yaml\n</code></pre>"},{"location":"kubernetes/Install-and-run-NooBaa-on-Kubernetes-cluster-in-single-and-multicloud-environment-on-3Engines-Cloud.html.html#install-noobaa-on-the-kubernetes-cluster","title":"Install NooBaa on the Kubernetes cluster\ud83d\udd17","text":"<p>We already have NooBaa available in our local environment, but we still need 
to install NooBaa on our Kubernetes cluster. NooBaa will use the same KUBECONFIG context as kubectl (as activated in Prerequisite No. 4), so install NooBaa in the dedicated namespace:</p> <pre><code>noobaa install -n noobaa\n</code></pre> <p>After a few minutes, this will install NooBaa and provide additional information about the setup. See the status of NooBaa with the command</p> <pre><code>noobaa status -n noobaa\n</code></pre> <p>It outputs several useful insights about the NooBaa installation, with the \u201ckey facts\u201d available towards the end of this status:</p> <ul> <li>NooBaa created a default backing store called noobaa-default-backing-store, backed by a block volume created in OpenStack.</li> <li>S3 credentials are provided to access the bucket created with the default backing store. Such a volume-based backing store is useful e.g. for providing S3 access to our block storage.</li> </ul> <p>For the purpose of this article, we will not use the default backing store, but rather learn to create a new backing store based on cloud S3 object storage. Such a setup can then be easily extended so that we end up with separate backing stores for different clouds. In the second part of this article you will create one store on WAW3-1 cloud, another one on WAW3-2 cloud and they will be available through one abstracted S3 bucket in NooBaa.</p>"},{"location":"kubernetes/Install-and-run-NooBaa-on-Kubernetes-cluster-in-single-and-multicloud-environment-on-3Engines-Cloud.html.html#create-a-noobaa-backing-store","title":"Create a NooBaa backing store\ud83d\udd17","text":""},{"location":"kubernetes/Install-and-run-NooBaa-on-Kubernetes-cluster-in-single-and-multicloud-environment-on-3Engines-Cloud.html.html#step-1-create-object-storage-bucket-on-waw3-1","title":"Step 1. 
Create object storage bucket on WAW3-1\ud83d\udd17","text":"<p>Now create an object storage bucket on WAW3-1 cloud:</p> <ul> <li>switch to Horizon,</li> <li>use commands Object Store \u2013&gt; Containers \u2013&gt; + Container to create a new object bucket.</li> </ul> <p></p> <p>Buckets on WAW3-1 cloud need to have unique names. In our case, we use bucket name noobaademo-waw3-1 which we will use throughout the article.</p> <p>Note</p> <p>You need to create a bucket with a different name and use this generated name to follow along.</p>"},{"location":"kubernetes/Install-and-run-NooBaa-on-Kubernetes-cluster-in-single-and-multicloud-environment-on-3Engines-Cloud.html.html#step-2-set-up-ec2-credentials","title":"Step 2. Set up EC2 credentials\ud83d\udd17","text":"<p>If you have properly set up the EC2 (S3) keys for your WAW3-1 object storage, take note of them with the following command:</p> <pre><code>openstack ec2 credentials list\n</code></pre>"},{"location":"kubernetes/Install-and-run-NooBaa-on-Kubernetes-cluster-in-single-and-multicloud-environment-on-3Engines-Cloud.html.html#step-3-create-a-new-noobaa-backing-store","title":"Step 3. Create a new NooBaa backing store\ud83d\udd17","text":"<p>With the above in place, we can create a new NooBaa backing store called custom-bs by running the command below. Make sure to replace the access-key XXXXXX and the secret-key YYYYYYY with your own EC2 keys and the bucket with your own bucket name:</p> <pre><code>noobaa -n noobaa backingstore create s3-compatible custom-bs --endpoint https://s3.waw3-1.3Engines.com --signature-version v4 --access-key XXXXXX \\\n--secret-key YYYYYYY --target-bucket noobaademo-waw3-1\n</code></pre> <p>Note that the credentials get stored as a Kubernetes secret in the namespace. 
You can verify that the backing store and the secret got created by running the following commands:</p> <pre><code>kubectl get backingstore -n noobaa\nkubectl get secret -n noobaa\n</code></pre> <p>The artifacts are named after the backing store, which makes them easy to identify when more such resources exist in the namespace.</p> <p>Also, when viewing the bucket in Horizon (backing store), we can see that NooBaa populated its folder structure:</p> <p></p>"},{"location":"kubernetes/Install-and-run-NooBaa-on-Kubernetes-cluster-in-single-and-multicloud-environment-on-3Engines-Cloud.html.html#step-4-create-a-bucket-class","title":"Step 4. Create a Bucket Class\ud83d\udd17","text":"<p>When we have the backing store, the next step is to create a BucketClass (BC). Such a BucketClass serves as a blueprint for NooBaa buckets: it defines</p> <ul> <li>which BackingStore(s) these buckets will use, and</li> <li>which placement strategy to use in case of multiple backing stores.</li> </ul> <p>The placement strategy could be Mirror or Spread. There is also support for using multiple tiers, where data is by default pushed to the first tier, and when this is full, to the next one.</p> <p>In order to create a BucketClass, prepare the following file custom-bc.yaml:</p> <p>custom-bc.yaml</p> <pre><code>apiVersion: noobaa.io/v1alpha1\nkind: BucketClass\nmetadata:\n labels:\n app: noobaa\n name: custom-bc\n namespace: noobaa\nspec:\n placementPolicy:\n tiers:\n - backingStores:\n - custom-bs\n placement: Spread\n</code></pre> <p>Then apply with:</p> <pre><code>kubectl apply -f custom-bc.yaml\n</code></pre>"},{"location":"kubernetes/Install-and-run-NooBaa-on-Kubernetes-cluster-in-single-and-multicloud-environment-on-3Engines-Cloud.html.html#step-5-create-an-objectbucketclaim","title":"Step 5. Create an ObjectBucketClaim\ud83d\udd17","text":"<p>As the last step, we create an ObjectBucketClaim. 
This bucket claim utilizes the noobaa.noobaa.io storage class which got deployed with NooBaa, and references the custom-bc bucket class created in the previous step. Create a file called custom-obc.yaml:</p> <p>custom-obc.yaml</p> <pre><code>apiVersion: objectbucket.io/v1alpha1\nkind: ObjectBucketClaim\nmetadata:\n name: custom-obc\n namespace: noobaa\nspec:\n generateBucketName: my-bucket\n storageClassName: noobaa.noobaa.io\n additionalConfig:\n bucketclass: custom-bc\n</code></pre> <p>Then apply with:</p> <pre><code>kubectl apply -f custom-obc.yaml\n</code></pre>"},{"location":"kubernetes/Install-and-run-NooBaa-on-Kubernetes-cluster-in-single-and-multicloud-environment-on-3Engines-Cloud.html.html#step-6-obtain-name-of-the-noobaa-bucket","title":"Step 6. Obtain name of the NooBaa bucket\ud83d\udd17","text":"<p>As a result, besides the ObjectBucketClaim resource, a configmap and a secret with the same name, custom-obc, got created in NooBaa. Let\u2019s view the configmap with:</p> <pre><code>kubectl get configmap custom-obc -n noobaa -o yaml\n</code></pre> <p>The result is similar to the following:</p> <pre><code>apiVersion: v1\ndata:\n BUCKET_HOST: s3.noobaa.svc\n BUCKET_NAME: my-bucket-7941ba4a-f57b-400a-b870-b337ec5284cf\n BUCKET_PORT: \"443\"\n BUCKET_REGION: \"\"\n BUCKET_SUBREGION: \"\"\nkind: ConfigMap\nmetadata:\n ...\n</code></pre> <p>We can see the name of the NooBaa bucket my-bucket-7941ba4a-f57b-400a-b870-b337ec5284cf, which is backed by our \u201cphysical\u201d WAW3-1 bucket. Store this name for later use in this article.</p>"},{"location":"kubernetes/Install-and-run-NooBaa-on-Kubernetes-cluster-in-single-and-multicloud-environment-on-3Engines-Cloud.html.html#step-7-obtain-secret-for-the-noobaa-bucket","title":"Step 7. Obtain secret for the NooBaa bucket\ud83d\udd17","text":"<p>The secret is also relevant for us as we need to extract the S3 keys for the NooBaa bucket. 
The access and secret keys are base64-encoded in the secret; we can retrieve them decoded with the following commands:</p> <pre><code>kubectl get secret custom-obc -n noobaa -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode\nkubectl get secret custom-obc -n noobaa -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode\n</code></pre> <p>Take note of the access and secret keys, as we will use them in the next step.</p>"},{"location":"kubernetes/Install-and-run-NooBaa-on-Kubernetes-cluster-in-single-and-multicloud-environment-on-3Engines-Cloud.html.html#step-8-connect-to-noobaa-bucket-from-s3cmd","title":"Step 8. Connect to NooBaa bucket from S3cmd\ud83d\udd17","text":"<p>NooBaa created a few services when it was deployed, which we can verify with the command below:</p> <pre><code>kubectl get services -n noobaa\n</code></pre> <p>The output should be similar to the one below:</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nnoobaa-db-pg ClusterIP 10.254.158.217 &lt;none&gt; 5432/TCP 3h24m\nnoobaa-mgmt LoadBalancer 10.254.145.9 64.225.135.152 80:31841/TCP,443:31736/TCP,8445:32063/TCP,8446:32100/TCP 3h24m\ns3 LoadBalancer 10.254.244.226 64.225.133.81 80:30948/TCP,443:31609/TCP,8444:30079/TCP,7004:31604/TCP 3h24m\nsts LoadBalancer 10.254.23.154 64.225.135.92 443:31374/TCP 3h24m\n</code></pre> <p>The \u201cs3\u201d service provides the endpoint that can be used to access NooBaa storage (backed by the actual storage in WAW3-1). In our case, this endpoint URL is 64.225.133.81. Replace it with the value you get from the above command, when working through this article.</p>"},{"location":"kubernetes/Install-and-run-NooBaa-on-Kubernetes-cluster-in-single-and-multicloud-environment-on-3Engines-Cloud.html.html#step-9-configure-s3cmd-to-access-noobaa","title":"Step 9. Configure S3cmd to access NooBaa\ud83d\udd17","text":"<p>Now that we have both the endpoint and the keys, we can configure s3cmd to access the bucket created by NooBaa. 
Create a configuration file noobaa.s3cfg with the following contents:</p> <pre><code>check_ssl_certificate = False\ncheck_ssl_hostname = False\naccess_key = XXXXXX\nsecret_key = YYYYYY\nhost_base = 64.225.133.81\nhost_bucket = 64.225.133.81\nuse_https = True\nverbosity = WARNING\nsignature_v2 = False\n</code></pre> <p>Then, from the same location, apply with:</p> <pre><code>s3cmd --configure -c noobaa.s3cfg\n</code></pre> <p>If s3cmd is not installed on your system, see Prerequisite No. 6.</p> <p>The s3cmd command will let you press Enter to confirm each value from the config file, or change it on the fly if different from the default.</p> <p>Omitting those questions in the output below, the result should be similar to the following:</p> <pre><code>...\nSuccess. Your access key and secret key worked fine :-)\n\nNow verifying that encryption works...\nNot configured. Never mind.\n\nSave settings? [y/N] y\nConfiguration saved to 'noobaa.s3cfg'\n</code></pre>"},{"location":"kubernetes/Install-and-run-NooBaa-on-Kubernetes-cluster-in-single-and-multicloud-environment-on-3Engines-Cloud.html.html#step-10-testing-access-to-the-bucket","title":"Step 10. Testing access to the bucket\ud83d\udd17","text":"<p>We can upload a test file to NooBaa. In our case, we upload a simple text file xyz.txt with text content \u201cxyz\u201d, using the following command:</p> <pre><code>s3cmd put xyz.txt s3://my-bucket-7941ba4a-f57b-400a-b870-b337ec5284cf -c noobaa.s3cfg\n</code></pre> <p>The file gets uploaded correctly:</p> <pre><code>upload: 'xyz.txt' -&gt; 's3://my-bucket-7941ba4a-f57b-400a-b870-b337ec5284cf/xyz.txt' [1 of 1]\n 4 of 4 100% in 0s 5.67 B/s done\n</code></pre> <p>We can also see in Horizon that a few new folders and files were added to NooBaa. 
However, we will not see the xyz.txt file directly there, because NooBaa applies its own fragmentation techniques to the data.</p>"},{"location":"kubernetes/Install-and-run-NooBaa-on-Kubernetes-cluster-in-single-and-multicloud-environment-on-3Engines-Cloud.html.html#connect-noobaa-in-a-multi-cloud-setup","title":"Connect NooBaa in a multi-cloud setup\ud83d\udd17","text":"<p>NooBaa can be used to create an abstracted S3 endpoint, connected to two or more cloud S3 endpoints. This can be helpful in scenarios of e.g. replicating the same data in multiple clouds or combining the storage of multiple clouds.</p> <p>In this section of the article we demonstrate the \u201cmirroring scenario\u201d. We create an S3 NooBaa endpoint replicating (mirroring) data between WAW3-1 cloud and WAW3-2 cloud.</p> <p>Note</p> <p>To illustrate the process, we are going to create a new set of resources and new S3 buckets, and introduce new names for the entities. Steps 1 to 9 from above are almost identical, so we shall denote them as Step 1 Multi-cloud, Step 2 Multi-cloud and so on.</p> <p>To proceed, first create two additional buckets from the Horizon interface. Adjust the commands and file contents in this section to reflect these bucket names.</p>"},{"location":"kubernetes/Install-and-run-NooBaa-on-Kubernetes-cluster-in-single-and-multicloud-environment-on-3Engines-Cloud.html.html#step-1-multi-cloud-create-bucket-on-waw3-1","title":"Step 1 Multi-cloud. Create bucket on WAW3-1\ud83d\udd17","text":"<p>Go to WAW3-1 Horizon interface and create a bucket we call noobaamirror-waw3-1 (supply your own bucket name here and adhere to it in the rest of the article). It will then be available on endpoint https://s3.waw3-1.3Engines.com.</p>"},{"location":"kubernetes/Install-and-run-NooBaa-on-Kubernetes-cluster-in-single-and-multicloud-environment-on-3Engines-Cloud.html.html#step-1-multi-cloud-create-bucket-on-waw3-2","title":"Step 1 Multi-cloud. 
Create bucket on WAW3-2\ud83d\udd17","text":"<p>Next, go to WAW3-2 Horizon interface and create a bucket we call noobaamirror-waw3-2 (again, supply your own bucket name here and adhere to it in the rest of the article). It will then be available on endpoint https://s3.waw3-2.3Engines.com.</p>"},{"location":"kubernetes/Install-and-run-NooBaa-on-Kubernetes-cluster-in-single-and-multicloud-environment-on-3Engines-Cloud.html.html#step-2-multi-cloud-set-up-ec2-credentials","title":"Step 2 Multi-cloud. Set up EC2 credentials\ud83d\udd17","text":"<p>Use the existing pair of EC2 credentials, or first create a new pair, and then use them in the next step.</p>"},{"location":"kubernetes/Install-and-run-NooBaa-on-Kubernetes-cluster-in-single-and-multicloud-environment-on-3Engines-Cloud.html.html#step-3-multi-cloud-create-backing-store-mirror-bs1-on-waw3-1","title":"Step 3 Multi-cloud. Create backing store mirror-bs1 on WAW3-1\ud83d\udd17","text":"<p>Apply the following command to create the mirror-bs1 backing store (change the bucket name, S3 access key and S3 secret key to your own):</p> <pre><code>noobaa -n noobaa backingstore create s3-compatible mirror-bs1 --endpoint https://s3.waw3-1.3Engines.com --signature-version v4 --access-key XXXXXX --secret-key YYYYYY --target-bucket noobaamirror-waw3-1\n</code></pre>"},{"location":"kubernetes/Install-and-run-NooBaa-on-Kubernetes-cluster-in-single-and-multicloud-environment-on-3Engines-Cloud.html.html#step-3-multi-cloud-create-backing-store-mirror-bs2-on-waw3-2","title":"Step 3 Multi-cloud. 
Create backing store mirror-bs2 on WAW3-2\ud83d\udd17","text":"<p>Apply the following command to create the mirror-bs2 backing store (change the bucket name, S3 access key and S3 secret key to your own):</p> <pre><code>noobaa -n noobaa backingstore create s3-compatible mirror-bs2 --endpoint https://s3.waw3-2.3Engines.com --signature-version v4 --access-key XXXXXX --secret-key YYYYYY --target-bucket noobaamirror-waw3-2\n</code></pre>"},{"location":"kubernetes/Install-and-run-NooBaa-on-Kubernetes-cluster-in-single-and-multicloud-environment-on-3Engines-Cloud.html.html#step-4-multi-cloud-create-a-bucket-class","title":"Step 4 Multi-cloud. Create a Bucket Class\ud83d\udd17","text":"<p>To create a BucketClass called bc-mirror, create a file called bc-mirror.yaml with the following contents:</p> <p>bc-mirror.yaml</p> <pre><code>apiVersion: noobaa.io/v1alpha1\nkind: BucketClass\nmetadata:\n labels:\n app: noobaa\n name: bc-mirror\n namespace: noobaa\nspec:\n placementPolicy:\n tiers:\n - backingStores:\n - mirror-bs1\n - mirror-bs2\n placement: Mirror\n</code></pre> <p>and apply with:</p> <pre><code>kubectl apply -f bc-mirror.yaml\n</code></pre> <p>Note</p> <p>The mirroring is implemented by listing two backing stores, mirror-bs1 and mirror-bs2, under the tiers option.</p>"},{"location":"kubernetes/Install-and-run-NooBaa-on-Kubernetes-cluster-in-single-and-multicloud-environment-on-3Engines-Cloud.html.html#step-5-multi-cloud-create-an-objectbucketclaim","title":"Step 5 Multi-cloud. 
Create an ObjectBucketClaim\ud83d\udd17","text":"<p>Again, create the file obc-mirror.yaml for the ObjectBucketClaim obc-mirror:</p> <p>obc-mirror.yaml</p> <pre><code>apiVersion: objectbucket.io/v1alpha1\nkind: ObjectBucketClaim\nmetadata:\n name: obc-mirror\n namespace: noobaa\nspec:\n generateBucketName: my-bucket\n storageClassName: noobaa.noobaa.io\n additionalConfig:\n bucketclass: bc-mirror\n</code></pre> <p>and apply with:</p> <pre><code>kubectl apply -f obc-mirror.yaml\n</code></pre>"},{"location":"kubernetes/Install-and-run-NooBaa-on-Kubernetes-cluster-in-single-and-multicloud-environment-on-3Engines-Cloud.html.html#step-6-multi-cloud-obtain-name-of-the-noobaa-bucket","title":"Step 6 Multi-cloud. Obtain name of the NooBaa bucket\ud83d\udd17","text":"<p>Extract the bucket name from the configmap:</p> <pre><code>kubectl get configmap obc-mirror -n noobaa -o yaml\n</code></pre>"},{"location":"kubernetes/Install-and-run-NooBaa-on-Kubernetes-cluster-in-single-and-multicloud-environment-on-3Engines-Cloud.html.html#step-7-multi-cloud-obtain-secret-for-the-noobaa-bucket","title":"Step 7 Multi-cloud. Obtain secret for the NooBaa bucket\ud83d\udd17","text":"<p>Extract the S3 keys from the created secret:</p> <pre><code>kubectl get secret obc-mirror -n noobaa -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode\nkubectl get secret obc-mirror -n noobaa -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode\n</code></pre>"},{"location":"kubernetes/Install-and-run-NooBaa-on-Kubernetes-cluster-in-single-and-multicloud-environment-on-3Engines-Cloud.html.html#step-8-multi-cloud-connect-to-noobaa-bucket-from-s3cmd","title":"Step 8 Multi-cloud. Connect to NooBaa bucket from S3cmd\ud83d\udd17","text":"<p>Create an additional config file for s3cmd, e.g. 
noobaa-mirror.s3cfg and update the access key, the secret key and the bucket name to the ones retrieved above:</p> <pre><code>s3cmd --configure -c noobaa-mirror.s3cfg\n</code></pre>"},{"location":"kubernetes/Install-and-run-NooBaa-on-Kubernetes-cluster-in-single-and-multicloud-environment-on-3Engines-Cloud.html.html#step-9-multi-cloud-configure-s3cmd-to-access-noobaa","title":"Step 9 Multi-cloud. Configure S3cmd to access NooBaa\ud83d\udd17","text":"<p>To test, upload the xyz.txt file, which behind the scenes uploads a copy to both clouds. Be sure to change the bucket name my-bucket-aa6b8a23-4a77-4306-ae36-0248fc1c44ff to the one retrieved from the configmap:</p> <pre><code>s3cmd put xyz.txt s3://my-bucket-aa6b8a23-4a77-4306-ae36-0248fc1c44ff -c noobaa-mirror.s3cfg\n</code></pre>"},{"location":"kubernetes/Install-and-run-NooBaa-on-Kubernetes-cluster-in-single-and-multicloud-environment-on-3Engines-Cloud.html.html#step-10-multi-cloud-testing-access-to-the-bucket","title":"Step 10 Multi-cloud. Testing access to the bucket\ud83d\udd17","text":"<p>To verify, delete the \u201cphysical\u201d bucket on one of the clouds (e.g. from WAW3-1) from the Horizon interface. With the s3cmd command below you can see that NooBaa will still hold the copy from WAW3-2 cloud:</p> <pre><code>s3cmd ls s3://my-bucket-aa6b8a23-4a77-4306-ae36-0248fc1c44ff -c noobaa-mirror.s3cfg\n2023-07-21 09:47 4 s3://my-bucket-aa6b8a23-4a77-4306-ae36-0248fc1c44ff/xyz.txt\n</code></pre>"},{"location":"kubernetes/Installing-HashiCorp-Vault-on-3Engines-Cloud-Magnum.html.html","title":"Installing HashiCorp Vault on 3Engines Cloud Magnum\ud83d\udd17","text":"<p>In Kubernetes, a Secret is an object that contains passwords, tokens, keys or any other small pieces of data. Using Secrets ensures that the probability of exposing confidential data while creating, running and editing Pods is much smaller. 
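For instance, a Secret's values are only base64-encoded, not encrypted; a quick Python illustration of how trivially such a value is recovered (the value below is made up):

```python
import base64

# Kubernetes stores Secret values base64-encoded; base64 is an encoding,
# not encryption, so anyone who can read the Secret object can recover
# the plaintext immediately.
plaintext = "s3cr3t-password"  # hypothetical secret value
encoded = base64.b64encode(plaintext.encode()).decode()
decoded = base64.b64decode(encoded).decode()
print(encoded)  # the form stored in the Secret manifest
print(decoded)  # trivially recovered plaintext
```

This is exactly what the kubectl commands with base64 --decode elsewhere in this documentation do. 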
The main problem is that Secrets are stored unencrypted in etcd, so anyone with</p> <ul> <li>API access, as well as anyone who</li> <li>can create a Pod or create a Deployment in a namespace</li> </ul> <p>can also retrieve or modify a Secret.</p> <p>You can apply a number of strategies to improve the security of the cluster or you can install a specialized solution such as HashiCorp Vault. It offers</p> <ul> <li>secure storage of all kinds of secrets \u2013 passwords, TLS certificates, database credentials, API encryption keys and others,</li> <li>encryption of all of the data,</li> <li>dynamic serving of the credentials,</li> <li>granular access policies for users, applications, and services,</li> <li>logging and auditing of data usage,</li> <li>revoking or deleting any key or secret,</li> <li>setting automated secret rotation \u2013 for administrators and users alike.</li> </ul> <p>In this article, we shall install HashiCorp Vault within a Magnum Kubernetes cluster, on 3Engines Cloud.</p>"},{"location":"kubernetes/Installing-HashiCorp-Vault-on-3Engines-Cloud-Magnum.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Install self-signed TLS certificates with CFSSL</li> <li>Generate certificates to enable encryption of traffic with Vault</li> <li>Install Consul storage backend for High Availability</li> <li>Install Vault</li> <li>Sealing and unsealing the Vault</li> <li>Unseal Vault</li> <li>Run Vault UI</li> <li>Return livenessProbe to production value</li> <li>Troubleshooting</li> </ul>"},{"location":"kubernetes/Installing-HashiCorp-Vault-on-3Engines-Cloud-Magnum.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p> <p>No. 
2 Familiarity with kubectl</p> <p>You should have an appropriate Kubernetes cluster up and running, with kubectl pointing to it How To Access Kubernetes Cluster Post Deployment Using Kubectl On 3Engines Cloud OpenStack Magnum</p> <p>No. 3 Familiarity with deploying Helm charts</p> <p>This article will introduce you to Helm charts on Kubernetes:</p> <p>Deploying Helm Charts on Magnum Kubernetes Clusters on 3Engines Cloud Cloud</p>"},{"location":"kubernetes/Installing-HashiCorp-Vault-on-3Engines-Cloud-Magnum.html.html#step-1-install-cfssl","title":"Step 1 Install CFSSL\ud83d\udd17","text":"<p>To ensure that Vault communication with the cluster is encrypted, we need to provide TLS certificates.</p> <p>We will use the self-signed TLS certificates issued by a private Certificate Authority. To generate them we will use CFSSL utilities: cfssl and cfssljson.</p> <p>cfssl is a CLI utility. cfssljson takes the JSON output from cfssl and writes certificates, keys, and CSR (certificate signing requests).</p> <p>We need to download the binaries of both tools: cfssl and cfssljson from https://github.com/cloudflare/cfssl and make them executable:</p> <pre><code>curl -L https://github.com/cloudflare/cfssl/releases/download/v1.6.3/cfssl_1.6.3_linux_amd64 -o cfssl\ncurl -L https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64 -o cfssljson\nchmod +x cfssl\nchmod +x cfssljson\n</code></pre> <p>Then we also need to add them to our path:</p> <pre><code>sudo mv cfssl cfssljson /usr/local/bin\n</code></pre>"},{"location":"kubernetes/Installing-HashiCorp-Vault-on-3Engines-Cloud-Magnum.html.html#step-2-generate-tls-certificates","title":"Step 2 Generate TLS certificates\ud83d\udd17","text":"<p>Before we start, let\u2019s create a dedicated namespace where all Vault-related Kubernetes resources will live:</p> <pre><code>kubectl create namespace vault\n</code></pre> <p>We will need to issue two sets of certificates. 
The first set will be a root certificate for Certificate Authority. The second will reference the CA certificate and create the actual Vault cert.</p> <p>To create the key request for CA, we will base it on a JSON file ca-csr.json. Create this file in your favorite editor, and if you want to, substitute the certificate details to your own use case:</p> <p>ca-csr.json</p> <pre><code>{\n \"hosts\": [\n \"cluster.local\"\n ],\n \"key\": {\n \"algo\": \"rsa\",\n \"size\": 2048\n },\n \"names\": [\n {\n \"C\": \"Poland\",\n \"L\": \"Warsaw\",\n \"O\": \"MyOrganization\"\n }\n ]\n}\n</code></pre> <p>Then issue the command to generate a self-signed root CA certificate.</p> <pre><code>cfssl gencert -initca ca-csr.json | cfssljson -bare ca\n</code></pre> <p>You should see output similar to the following:</p> <pre><code>2023/01/02 15:27:36 [INFO] generating a new CA key and certificate from CSR\n2023/01/02 15:27:36 [INFO] generate received request\n2023/01/02 15:27:36 [INFO] received CSR\n2023/01/02 15:27:36 [INFO] generating key: rsa-2048\n2023/01/02 15:27:36 [INFO] encoded CSR\n2023/01/02 15:27:36 [INFO] signed certificate with serial number 472447709029717049436439292623827313295747809061\n</code></pre> <p>Also, as a result, three entities are generated:</p> <ul> <li>the private key,</li> <li>the CSR, and the</li> <li>self-signed certificate (ca.pem, ca.csr, ca-key.pem).</li> </ul> <p>The next step is to create Vault certificates, which reference the private CA. To do so, first create a configuration file ca-config.json, to override the default configuration. 
This is especially useful for changing certificate validity:</p> <p>ca-config.json</p> <pre><code>{\n \"signing\": {\n \"default\": {\n \"expiry\": \"17520h\"\n },\n \"profiles\": {\n \"default\": {\n \"usages\": [\"signing\", \"key encipherment\", \"server auth\", \"client auth\"],\n \"expiry\": \"17520h\"\n }\n }\n }\n}\n</code></pre> <p>Then generate the Vault keys, referencing this file and the CA keys:</p> <pre><code>cfssl gencert \\\n -ca ./ca.pem \\\n -ca-key ./ca-key.pem \\\n -config ca-config.json \\\n -profile default \\\n -hostname=\"vault,vault.vault.svc.cluster.local,localhost,127.0.0.1\" \\\n ca-csr.json | cfssljson -bare vault\n</code></pre> <p>The result will be the following:</p> <pre><code>2023/01/02 16:19:52 [INFO] generate received request\n2023/01/02 16:19:52 [INFO] received CSR\n2023/01/02 16:19:52 [INFO] generating key: rsa-2048\n2023/01/02 16:19:52 [INFO] encoded CSR\n2023/01/02 16:19:52 [INFO] signed certificate with serial number 709743788174272015258726707100830785425213226283\n</code></pre> <p>Also, another three files get created in your working folder: vault.pem, vault.csr, vault-key.pem.</p> <p>The last step is to store the generated keys as Kubernetes TLS secrets on our cluster:</p> <pre><code>kubectl -n vault create secret tls tls-ca --cert ./ca.pem --key ./ca-key.pem -n vault\nkubectl -n vault create secret tls tls-server --cert ./vault.pem --key ./vault-key.pem -n vault\n</code></pre> <p>The naming of those secrets reflects the Vault Helm chart default names.</p>"},{"location":"kubernetes/Installing-HashiCorp-Vault-on-3Engines-Cloud-Magnum.html.html#step-3-install-consul-helm-chart","title":"Step 3 Install Consul Helm chart\ud83d\udd17","text":"<p>The Consul backend will ensure High Availability of our Vault installation. 
Consul will live in a namespace that we have already created, vault.</p> <p>Here is an override configuration file for the Consul Helm chart: consul-values.yaml.</p> <p>consul-values.yaml</p> <pre><code>global:\n datacenter: vault-kubernetes-guide\n\nclient:\n enabled: true\n\nserver:\n replicas: 1\n bootstrapExpect: 1\n disruptionBudget:\n maxUnavailable: 0\n</code></pre> <p>Now install the hashicorp repository of Helm charts and verify that vault is in it:</p> <pre><code>helm repo add hashicorp https://helm.releases.hashicorp.com\nhelm search repo hashicorp/vault\n</code></pre> <p>As the last step, install Consul chart:</p> <pre><code>helm install consul hashicorp/consul -f consul-values.yaml -n vault\n</code></pre> <p>This is the report about success of the installation:</p> <pre><code>NAME: consul\nLAST DEPLOYED: Thu Feb 9 18:52:58 2023\nNAMESPACE: vault\nSTATUS: deployed\nREVISION: 1\nNOTES:\nThank you for installing HashiCorp Consul!\n\nYour release is named consul.\n</code></pre> <p>Shortly, several Consul pods will get deployed in the vault namespace. Run the following command to verify it:</p> <pre><code>kubectl get pods -n vault\n</code></pre> <p>Wait until all of the pods are Running and then proceed with the next step.</p>"},{"location":"kubernetes/Installing-HashiCorp-Vault-on-3Engines-Cloud-Magnum.html.html#step-4-install-vault-helm-chart","title":"Step 4 Install Vault Helm chart\ud83d\udd17","text":"<p>We are now ready to install Vault.</p> <p>First, let\u2019s provide file vault-values.yaml which will override configuration file for the Vault Helm chart. 
These overrides ensure turning on encryption, High Availability, setting up larger time for readinessProbe and exposing the UI as LoadBalancer service type:</p> <p>vault-values.yaml</p> <pre><code># Vault Helm Chart Value Overrides\nglobal:\n enabled: true\n tlsDisable: false\n\ninjector:\n enabled: true\n image:\n repository: \"hashicorp/vault-k8s\"\n tag: \"0.14.1\"\n\n resources:\n requests:\n memory: 500Mi\n cpu: 500m\n limits:\n memory: 1000Mi\n cpu: 1000m\n\nserver:\n # These Resource Limits are in line with node requirements in the\n # Vault Reference Architecture for a Small Cluster\n\n image:\n repository: \"hashicorp/vault\"\n tag: \"1.9.2\"\n\n # For HA configuration and because we need to manually init the vault,\n # we need to define custom readiness/liveness Probe settings\n readinessProbe:\n enabled: true\n path: \"/v1/sys/health?standbyok=true&amp;sealedcode=204&amp;uninitcode=204\"\n livenessProbe:\n enabled: true\n path: \"/v1/sys/health?standbyok=true\"\n initialDelaySeconds: 360\n\n extraEnvironmentVars:\n VAULT_CACERT: /vault/userconfig/tls-ca/tls.crt\n\n # extraVolumes is a list of extra volumes to mount. 
These will be exposed\n # to Vault in the path `/vault/userconfig/&lt;name&gt;/`.\n # These reflect the Kubernetes vault and ca secrets created\n extraVolumes:\n - type: secret\n name: tls-server\n - type: secret\n name: tls-ca\n\n standalone:\n enabled: false\n\n # Run Vault in \"HA\" mode.\n ha:\n enabled: true\n replicas: 3\n config: |\n ui = true\n\n listener \"tcp\" {\n tls_disable = 0\n address = \"0.0.0.0:8200\"\n tls_cert_file = \"/vault/userconfig/tls-server/tls.crt\"\n tls_key_file = \"/vault/userconfig/tls-server/tls.key\"\n tls_min_version = \"tls12\"\n }\n storage \"consul\" {\n path = \"vault\"\n address = \"consul-consul-server:8500\"\n }\n\n# Vault UI\nui:\n enabled: true\n serviceType: \"LoadBalancer\"\n serviceNodePort: null\n externalPort: 8200\n</code></pre> <p>Then run the installation:</p> <pre><code>helm install vault hashicorp/vault -n vault -f vault-values.yaml\n</code></pre> <p>As a result, several pods get created:</p> <pre><code>kubectl get pods -n vault\nNAME READY STATUS RESTARTS AGE\nconsul-consul-client-655fq 1/1 Running 0 104s\nconsul-consul-client-dkngt 1/1 Running 0 104s\nconsul-consul-client-nnbnl 1/1 Running 0 104s\nconsul-consul-connect-injector-8447d8d97b-8hkj8 1/1 Running 0 104s\nconsul-consul-server-0 1/1 Running 0 104s\nconsul-consul-webhook-cert-manager-7c4ccbdd4c-d89bw 1/1 Running 0 104s\nvault-0 1/1 Running 0 23s\nvault-1 1/1 Running 0 23s\nvault-2 1/1 Running 0 23s\nvault-agent-injector-6c7cfc768-kv968 1/1 Running 0 23s\n</code></pre>"},{"location":"kubernetes/Installing-HashiCorp-Vault-on-3Engines-Cloud-Magnum.html.html#sealing-and-unsealing-the-vault","title":"Sealing and unsealing the Vault\ud83d\udd17","text":"<p>Right after the installation, Vault server starts in a sealed state. It knows where and how to access the physical storage but, by design, it is lacking the key to decrypt any of it. 
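That key is reconstructed from so-called unseal keys, which are shares produced by Shamir's Secret Sharing (the seal type Vault reports is shamir). A minimal, illustrative Python sketch of the 3-out-of-5 idea follows; it is not Vault's actual implementation:

```python
import random

# Sketch of Shamir's 3-of-5 secret sharing: the secret is the constant
# term of a random degree-2 polynomial over a prime field; each share is
# one point on that polynomial, and any 3 points determine it uniquely.
PRIME = 2**127 - 1  # a Mersenne prime, used as the field modulus

def make_shares(secret, threshold=3, count=5):
    coeffs = [secret] + [random.randrange(1, PRIME) for _ in range(threshold - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, count + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 recovers the constant term
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = make_shares(123456789)
print(recover(shares[:3]))  # any 3 of the 5 shares suffice: 123456789
print(recover(shares[2:]))  # a different subset of 3 works too: 123456789
```

With only two shares, the interpolation yields an essentially random value, which is why dispersing the shares among several people protects the secret. 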
The only operations you can do when Vault is sealed are to</p> <ul> <li>unseal Vault and</li> <li>check the status of the seal.</li> </ul> <p>The reverse process, called unsealing, consists of creating the plaintext root key necessary to read the decryption key.</p> <p>In real life, there would be an administrator who could first generate the so-called key shares or unseal keys, which is a set of exactly five text strings. Then they would disperse these keys to two or more people, so that the secrets would be hard to gather for a potential attacker. And to perform the unsealing, at least three out of those five strings would have to be presented to the Vault, in any order.</p> <p>In this article, however, you are both the administrator and the user and can set up things your way. First you will</p> <ul> <li>generate the keys and have them available in plain sight and then you will</li> <li>enter three out of those five strings back to the system.</li> </ul> <p>You will have a limited but sufficient amount of time to enter the keys; the livenessProbe initial delay in file vault-values.yaml is 360 seconds, which gives you ample time.</p> <p>At the end of the article we show how to interactively set it to 60 seconds, so that the cluster can check the health of the pods more frequently.</p>"},{"location":"kubernetes/Installing-HashiCorp-Vault-on-3Engines-Cloud-Magnum.html.html#step-5-unseal-vault","title":"Step 5 Unseal Vault\ud83d\udd17","text":"<p>Three pods in the Kubernetes cluster run Vault and are named vault-0, vault-1, vault-2. To make the Vault functional, you will have to unseal all three of them.</p> <p>To start, enter the container in vault-0:</p> <pre><code>kubectl -n vault exec -it vault-0 -- sh\n</code></pre> <p>Then from inside the pod, get the keys:</p> <pre><code>vault operator init\n</code></pre> <p>The result will be similar to the following: you will get 5 unseal keys and a root token. 
Save these keys to Notepad, so you have convenient access to them later:</p> <pre><code>Unseal Key 1: jcJj2ukVBNG5K01PX3UkskPotc+tGAvalG5CqBveS6LN\nUnseal Key 2: OBzqfTYL9lmmvuewk85kPxpgc0D/CDVXrY9cdBElA3hJ\nUnseal Key 3: M6QysiGixui4SlqB7Jdgv0jaHn8m45V91iabrxRvNo6v\nUnseal Key 4: H7T5BHR2isbBSHfu2q4aKG0hvvA13uXlT9799whxmuL+\nUnseal Key 5: rtbXv3TqdUeN3luelJa8OOI/CKlILANXxFVkyE/SKv4c\n\nInitial Root Token: s.Pt7xVk5rShSuIJqRPqBFWY5H\n</code></pre> <p>Then, from within the pod vault-0, unseal it by typing:</p> <pre><code>vault operator unseal\n</code></pre> <p>You will be prompted for the key; paste key 1 from your notepad. Repeat this process 3 times in the vault-0 pod, each time providing a different key out of those five you have just generated.</p> <p>This is what the entire process looks like:</p> <p></p> <p>On the third attempt, Initialized changes to true and Sealed changes to false:</p> <pre><code>Key Value\n--- -----\nSeal Type shamir\nInitialized true\nSealed false\n... ...\n</code></pre> <p>The pod is unsealed.</p> <p>Now repeat the same process for vault-1 and vault-2 pods.</p> <p>To stop using the console in vault-0, press Ctrl-D on the keyboard. Then enter vault-1 with the command</p> <pre><code>kubectl -n vault exec -it vault-1 -- sh\n</code></pre> <p>and unseal it by entering at least three keys. Then repeat the procedure for vault-2. 
Only when all three pods are unsealed will the Vault become active.</p>"},{"location":"kubernetes/Installing-HashiCorp-Vault-on-3Engines-Cloud-Magnum.html.html#step-6-run-vault-ui","title":"Step 6 Run Vault UI\ud83d\udd17","text":"<p>With our configuration, Vault UI is exposed on port 8200 of a dedicated LoadBalancer that got created.</p> <p>To check the LoadBalancer, run:</p> <pre><code>kubectl -n vault get svc\n</code></pre> <p>Check the external IP of the LoadBalancer (it could take a couple of minutes before the external IP becomes available):</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\n...\nvault-ui LoadBalancer 10.254.49.9 64.225.129.145 8200:32091/TCP 143m\n</code></pre> <p>Type the external IP into the browser, specifying HTTPS and port 8200. The browser may warn about the self-signed certificate and claim that proceeding is risky. Accept the risk and you will see that Vault UI is available, similar to the image below. To log in, provide the token which you obtained earlier:</p> <p></p> <p>You can now start using the Vault.</p> <p></p>"},{"location":"kubernetes/Installing-HashiCorp-Vault-on-3Engines-Cloud-Magnum.html.html#return-livenessprobe-to-production-value","title":"Return livenessProbe to production value\ud83d\udd17","text":"<p>The livenessProbe in Kubernetes defines how the system checks the health of the pods; its initialDelaySeconds value is the time it waits before the first check. That would normally not be a concern of yours, but if you do not unseal the Vault within that amount of time, the unsealing won\u2019t work. Under normal circumstances, the value would be 60 seconds so that in case of any disturbance, the system would react within one minute instead of six. But it is very hard to copy and enter three strings in under one minute, which is what would happen if the value of 60 were set in file vault-values.yaml. 
You would almost inevitably see Kubernetes error 137 (the container killed with SIGKILL), meaning that you did not perform the required operations in time.</p> <p>In file vault-values.yaml, the following section defines 360 seconds as the initial delay of the livenessProbe:</p> <pre><code>livenessProbe:\n enabled: true\n path: \"/v1/sys/health?standbyok=true\"\n initialDelaySeconds: 360\n</code></pre> <p>To return the value of livenessProbe to 60, execute the command:</p> <pre><code>kubectl edit statefulset vault -n vault\n</code></pre> <p>You can now access the equivalent of file vault-values.yaml inside the Kubernetes cluster. The command automatically opens a Vim-like editor, so press the O key on the keyboard to enter insert mode and change the value:</p> <p></p> <p>When done, save and leave Vim with the standard :w and :q syntax.</p>"},{"location":"kubernetes/Installing-HashiCorp-Vault-on-3Engines-Cloud-Magnum.html.html#troubleshooting","title":"Troubleshooting\ud83d\udd17","text":"<p>Check the events, which can point out hints of what needs to be improved:</p> <pre><code>kubectl get events -n vault\n</code></pre> <p>If there are errors and you want to delete the Vault installation in order to repeat the process from a clean slate, note that a MutatingWebhookConfiguration might be left in the default namespace. 
Delete it prior to trying again:</p> <pre><code>kubectl get MutatingWebhookConfiguration\n\nkubectl delete MutatingWebhookConfiguration consul-consul-connect-injector\nkubectl delete MutatingWebhookConfiguration vault-agent-injector-cfg\n</code></pre>"},{"location":"kubernetes/Installing-HashiCorp-Vault-on-3Engines-Cloud-Magnum.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>Now you have Vault server as a part of the cluster and you can also use it from the IP address it got installed to.</p> <p>Another way to improve Kubernetes security is securing applications with HTTPS using ingress:</p> <p>Deploying HTTPS Services on Magnum Kubernetes in 3Engines Cloud Cloud.</p>"},{"location":"kubernetes/Installing-JupyterHub-on-Magnum-Kubernetes-cluster-in-3Engines-Cloud-cloud.html.html","title":"Installing JupyterHub on Magnum Kubernetes Cluster in 3Engines Cloud Cloud\ud83d\udd17","text":"<p>Jupyter notebooks are a popular method of presenting application code, as well as running exploratory experiments and analysis, conveniently, from a web browser. 
From a Jupyter notebook, one can run code, see the generated results in attractive visual form, and often also interact with the generated output.</p> <p>JupyterHub is an open-source service that creates cloud-based Jupyter notebook servers, on-demand, enabling users to run their notebooks without being concerned about the setup and required resources.</p> <p>It is straightforward to quickly deploy JupyterHub using the Magnum Kubernetes service, as we present in this article.</p>"},{"location":"kubernetes/Installing-JupyterHub-on-Magnum-Kubernetes-cluster-in-3Engines-Cloud-cloud.html.html#what-we-are-going-to-cover","title":"What We are Going to Cover\ud83d\udd17","text":"<ul> <li>Authenticate to the cluster</li> <li>Run Jupyterhub Helm chart installation</li> <li>Retrieve details of Jupyterhub service</li> <li>Run Jupyterhub on HTTPS</li> </ul>"},{"location":"kubernetes/Installing-JupyterHub-on-Magnum-Kubernetes-cluster-in-3Engines-Cloud-cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p> <p>No. 2 kubectl up and running</p> <p>For further instructions refer to How To Access Kubernetes Cluster Post Deployment Using Kubectl On 3Engines Cloud OpenStack Magnum</p> <p>No. 3 Helm up and running</p> <p>Helm is a package manager for Kubernetes, as explained in the article</p> <p>Deploying Helm Charts on Magnum Kubernetes Clusters on 3Engines Cloud Cloud</p> <p>No. 4 A registered domain name available</p> <p>To see the results of the installation, you should have a registered domain of your own. 
You will use it in Step 5 to run JupyterHub on HTTPS in a browser.</p>"},{"location":"kubernetes/Installing-JupyterHub-on-Magnum-Kubernetes-cluster-in-3Engines-Cloud-cloud.html.html#step-1-authenticate-to-the-cluster","title":"Step 1 Authenticate to the cluster\ud83d\udd17","text":"<p>First of all, we need to authenticate to the cluster. It may so happen that you already have a cluster at your disposal and that the config file is already in place. In other words, you are able to execute the kubectl command immediately.</p> <p>You may also create a new cluster and call it, say, jupyter-cluster, as explained in Prerequisite No. 2. In that case, run from your local machine the following command to create config file in the present working directory:</p> <pre><code>openstack coe cluster config jupyter-cluster\n</code></pre> <p>This will output the command to set the KUBECONFIG env, which is a variable pointing to the location of your newly created cluster e.g.</p> <pre><code>export KUBECONFIG=/home/eouser/config\n</code></pre> <p>Run this command.</p>"},{"location":"kubernetes/Installing-JupyterHub-on-Magnum-Kubernetes-cluster-in-3Engines-Cloud-cloud.html.html#step-2-apply-preliminary-configuration","title":"Step 2 Apply preliminary configuration\ud83d\udd17","text":"<p>OpenStack Magnum by default applies certain security restrictions for pods running on the cluster, in line with \u201cleast privileges\u201d practice. JupyterHub will require some additional privileges in order to run correctly.</p> <p>We will start by creating a dedicated namespace for our JupyterHub Helm artifacts:</p> <pre><code>kubectl create namespace jupyterhub\n</code></pre> <p>The next step is to create a RoleBinding that will add a magnum:podsecuritypolicy:privileged ClusterRole to the ServiceAccount which will be later deployed by JupyterHub Helm chart in the jupyterhub namespace. This role will enable additional privileges to this Service Account. 
Create a file jupyterhub-rolebinding.yaml with the following contents:</p> <p>jupyterhub-rolebinding.yaml</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n name: jupyterhub-rolebinding\n namespace: jupyterhub\nsubjects:\n- apiGroup: rbac.authorization.k8s.io\n kind: Group\n name: system:serviceaccounts\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: magnum:podsecuritypolicy:privileged\n</code></pre> <p>Then apply with:</p> <pre><code>kubectl apply -f jupyterhub-rolebinding.yaml\n</code></pre>"},{"location":"kubernetes/Installing-JupyterHub-on-Magnum-Kubernetes-cluster-in-3Engines-Cloud-cloud.html.html#step-3-run-jupyterhub-helm-chart-installation","title":"Step 3 Run Jupyterhub Helm chart installation\ud83d\udd17","text":"<p>To install Helm chart with the default settings use the below set of commands. This will</p> <ul> <li>download and update the JupyterHub repository, and</li> <li>install the chart to the jupyterhub namespace.</li> </ul> <pre><code>helm repo add jupyterhub https://hub.jupyter.org/helm-chart/\nhelm repo update\nhelm install jupyterhub jupyterhub/jupyterhub --version 2.0.0 --namespace jupyterhub\n</code></pre> <p>This is the result of successful Helm chart installation:</p> <p></p>"},{"location":"kubernetes/Installing-JupyterHub-on-Magnum-Kubernetes-cluster-in-3Engines-Cloud-cloud.html.html#step-4-retrieve-details-of-your-service","title":"Step 4 Retrieve details of your service\ud83d\udd17","text":"<p>Once all the Helm resources get deployed to the jupyterhub namespace, we can view their state and definitions using standard kubectl commands.</p> <p>To view the services resource created by Helm, execute the following command:</p> <pre><code>kubectl get services -n jupyterhub\n</code></pre> <p>There are several resources created and a few services. 
The one most interesting to us is the proxy-public service of type LoadBalancer, which exposes JupyterHub to the public network:</p> <pre><code>$ kubectl get services -n jupyterhub\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nhub ClusterIP 10.254.209.133 &lt;none&gt; 8081/TCP 18d\nproxy-api ClusterIP 10.254.86.239 &lt;none&gt; 8001/TCP 18d\nproxy-public LoadBalancer 10.254.168.141 64.225.131.136 80:31027/TCP 18d\n</code></pre> <p>The External IP of the proxy-public service will initially be in the &lt;pending&gt; state. Refresh this command and after 2-5 minutes, you will see the floating IP assigned to the service. You can then type this IP into the browser.</p> <p>First, you will enter the login screen. Provide any combination of dummy login and password; after a moment, JupyterHub gets loaded into the browser:</p> <p></p> <p>JupyterHub is now working on HTTP and a direct IP address and you can use it as is.</p> <p>Warning</p> <p>If in the next step you start running a JupyterHub on HTTPS, you will not be able to run it as an HTTP service unless it has been relaunched.</p>"},{"location":"kubernetes/Installing-JupyterHub-on-Magnum-Kubernetes-cluster-in-3Engines-Cloud-cloud.html.html#step-5-run-on-https","title":"Step 5 Run on HTTPS\ud83d\udd17","text":"<p>The JupyterHub Helm chart supports HTTPS deployments natively. Once we have deployed the chart above, we can simply upgrade the chart to enable serving it on HTTPS. Under the hood, it will generate the certificates using the Let\u2019s Encrypt certificate authority.</p> <p>In order to enable HTTPS, prepare a file for the configuration override, e.g. 
jupyter-https-values.yaml with the following contents (adjust the email and domain to your own):</p> <p>jupyter-https-values.yaml</p> <pre><code>proxy:\n https:\n enabled: true\n hosts:\n - mysampledomain.info\n letsencrypt:\n contactEmail: [email\u00a0protected]\n</code></pre> <p>Then upgrade the chart with the following upgrade command:</p> <pre><code>helm upgrade -n jupyterhub jupyterhub jupyterhub/jupyterhub -f jupyter-https-values.yaml\n</code></pre> <p>As noted in Prerequisite No. 4, you should have an available registered domain so that you can now point it to the address that the LoadBalancer for service proxy-public returned above. Please ensure that the records in your domain registrar are correctly associated. Concretely, we\u2019ve associated the A record set of mysampledomain.info with the record 64.225.131.136 (the public IP address of our service). Once this is done, JupyterHub gets served on HTTPS:</p> <p></p>"},{"location":"kubernetes/Installing-JupyterHub-on-Magnum-Kubernetes-cluster-in-3Engines-Cloud-cloud.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>For the production environment: replace the dummy authenticator with an alternative authentication mechanism, ensure persistence by e.g. connecting to a Postgres database. These steps are beyond the scope of this article.</p>"},{"location":"kubernetes/Kubernetes-cluster-observability-with-Prometheus-and-Grafana-on-3Engines-Cloud.html.html","title":"Kubernetes cluster observability with Prometheus and Grafana on 3Engines Cloud\ud83d\udd17","text":"<p>Complex systems deployed on Kubernetes take advantage of multiple Kubernetes resources. 
Such deployments often consist of a number of namespaces, pods and many other entities, which contribute to consuming the cluster resources.</p> <p>To allow proper insight into how the cluster resources are utilized, and enable optimizing their use, one needs a functional cluster observability setup.</p> <p>In this article we will present the use of a popular open-source observability stack consisting of Prometheus and Grafana.</p>"},{"location":"kubernetes/Kubernetes-cluster-observability-with-Prometheus-and-Grafana-on-3Engines-Cloud.html.html#what-are-we-going-to-cover","title":"What Are We Going To Cover\ud83d\udd17","text":"<ul> <li>Install Prometheus</li> <li>Install Grafana</li> <li>Access Prometheus as datasource to Grafana</li> <li>Add cluster observability dashboard</li> </ul>"},{"location":"kubernetes/Kubernetes-cluster-observability-with-Prometheus-and-Grafana-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Hosting</p> <p>You need a 3Engines Cloud hosting account with Horizon interface https://horizon.3Engines.com.</p> <p>No. 2 A cluster created on cloud</p> <p>Kubernetes cluster available. For guideline on creating a Kubernetes cluster refer to How to Create a Kubernetes Cluster Using 3Engines Cloud OpenStack Magnum.</p> <p>No. 3 Familiarity with Helm</p> <p>For more information on using Helm and installing apps with Helm on Kubernetes, refer to Deploying Helm Charts on Magnum Kubernetes Clusters on 3Engines Cloud Cloud</p> <p>No. 4 Access to kubectl command line</p> <p>The instructions for activation of kubectl are provided in: How To Access Kubernetes Cluster Post Deployment Using Kubectl On 3Engines Cloud OpenStack Magnum</p>"},{"location":"kubernetes/Kubernetes-cluster-observability-with-Prometheus-and-Grafana-on-3Engines-Cloud.html.html#1-install-prometheus-with-helm","title":"1. 
Install Prometheus with Helm\ud83d\udd17","text":"<p>Prometheus is an open-source monitoring and alerting toolkit, widely used in System Administration and DevOps domains. Prometheus comes with a timeseries database, which can store metrics generated by a variety of other systems and software tools. It provides a query language called PromQL to efficiently access this data. In our case, we will use Prometheus to get access to the metrics generated by our Kubernetes cluster.</p> <p>We will use the Prometheus distribution delivered via Bitnami, so the first step is to add the Bitnami repository to our local Helm cache. To do so, type in the following command:</p> <pre><code>helm repo add bitnami https://charts.bitnami.com/bitnami\n</code></pre> <p>Next, install the Prometheus Helm chart:</p> <pre><code>helm install prometheus bitnami/kube-prometheus\n</code></pre> <p>With the above commands correctly applied, the result should be similar to the following:</p> <pre><code>NAME: prometheus\nLAST DEPLOYED: Thu Nov 2 09:22:38 2023\nNAMESPACE: default\nSTATUS: deployed\nREVISION: 1\nTEST SUITE: None\nNOTES:\nCHART NAME: kube-prometheus\nCHART VERSION: 8.21.2\nAPP VERSION: 0.68.0\n</code></pre> <p>Note that we are deploying the Helm chart to the default namespace for simplicity. 
For production, you might consider using a dedicated namespace.</p> <p>Behind the scenes, several Prometheus pods are launched by the chart, which can be verified as follows:</p> <pre><code>kubectl get pods\n\n...\n\nNAME READY STATUS RESTARTS AGE\nalertmanager-prometheus-kube-prometheus-alertmanager-0 2/2 Running 0 2m39s\nprometheus-kube-prometheus-blackbox-exporter-5cf8597545-22wxc 1/1 Running 0 2m51s\nprometheus-kube-prometheus-operator-69584c98f-7wwrg 1/1 Running 0 2m51s\nprometheus-kube-state-metrics-db4f67c5c-h77lb 1/1 Running 0 2m51s\nprometheus-node-exporter-8twzf 1/1 Running 0 2m51s\nprometheus-node-exporter-sc8d7 1/1 Running 0 2m51s\nprometheus-prometheus-kube-prometheus-prometheus-0 2/2 Running 0 2m39s\n</code></pre> <p>Similarly, several dedicated Kubernetes services are also deployed. The service prometheus-kube-prometheus-prometheus exposes the Prometheus dashboard. To access this service in the browser on the default port 9090, type in the following command:</p> <pre><code>kubectl port-forward svc/prometheus-kube-prometheus-prometheus 9090:9090\n</code></pre> <p>Then open localhost:9090 in your browser to see a result similar to the following:</p> <p></p> <p>Notice that when you start typing kube in the search box, the autocomplete suggests some of the metrics that are available from our Kubernetes cluster. Along with the Helm chart installation, these metrics got exposed to Prometheus, so they are stored in the Prometheus database and can be queried for.</p> <p></p> <p>You can select one of the metrics and hit the Execute button to run the query for statistics of this metric. For example, insert the following expression</p> <pre><code>kube_pod_info{namespace=\"default\"}\n</code></pre> <p>to query for all pods in the default namespace. 
(Further elaboration on the capabilities of the Prometheus GUI and PromQL syntax is beyond the scope of this article.)</p> <p></p>"},{"location":"kubernetes/Kubernetes-cluster-observability-with-Prometheus-and-Grafana-on-3Engines-Cloud.html.html#2-install-grafana","title":"2. Install Grafana\ud83d\udd17","text":"<p>The next step is to install Grafana. We already added the Bitnami repository when installing Prometheus, so the Grafana chart is also available in our local cache. We only need to install Grafana.</p> <p>Note that if you want to keep an active browser session of Prometheus from the previous step, you will need to start another Linux terminal to proceed with the installation steps below.</p> <p>By default, the Grafana chart will be installed with a random auto-generated admin password. We can overwrite one of the Helm settings to define our own password, in this case: ownpassword, for simplicity of the demo:</p> <pre><code>helm install grafana bitnami/grafana --set admin.password=ownpassword\n</code></pre> <p>If you prefer to stick to the defaults, instead of the above command, use the following commands to install the chart and extract the auto-generated password:</p> <pre><code>helm install grafana bitnami/grafana\necho \"Password: $(kubectl get secret grafana-admin --namespace default -o jsonpath=\"{.data.GF_SECURITY_ADMIN_PASSWORD}\" | base64 -d)\"\n</code></pre> <p>There will be a single pod generated by the chart installation. 
Wait until this pod is ready before proceeding with the further steps:</p> <pre><code>kubectl get pods\n\nNAME READY STATUS RESTARTS AGE\n...\ngrafana-fb6877dbc-5jvjc 1/1 Running 0 65s\n...\n</code></pre> <p>Now, similarly to Prometheus, we can access the Grafana dashboard locally in the browser via the port-forward command:</p> <pre><code>kubectl port-forward svc/grafana 8080:3000\n</code></pre> <p>Then access the Grafana dashboard by entering localhost:8080 in the browser:</p> <p></p> <p>Type the login: admin and the password ownpassword (or the auto-generated password you extracted in the earlier step).</p>"},{"location":"kubernetes/Kubernetes-cluster-observability-with-Prometheus-and-Grafana-on-3Engines-Cloud.html.html#3-add-prometheus-as-datasource-to-grafana","title":"3. Add Prometheus as datasource to Grafana\ud83d\udd17","text":"<p>In this step, we will set up Grafana to use our Prometheus installation as a datasource.</p> <p>To proceed, click on the Home menu in the upper left corner of the Grafana UI, select Connections and then Data sources:</p> <p></p> <p>Then select Add data source and choose Prometheus as the datasource type. You will enter the following screen:</p> <p></p> <p>Change the \u201cPrometheus server URL\u201d field to http://prometheus-kube-prometheus-prometheus.default.svc.cluster.local:9090 which represents the address of the Prometheus Kubernetes service in charge of exposing the metrics.</p> <p>Hit the Save and test button. If all went well, you will see the following screen:</p> <p></p>"},{"location":"kubernetes/Kubernetes-cluster-observability-with-Prometheus-and-Grafana-on-3Engines-Cloud.html.html#4-add-cluster-observability-dashboard","title":"4. 
Add cluster observability dashboard\ud83d\udd17","text":"<p>We could build a Kubernetes observability dashboard from scratch, but we would much rather use one of the open-source dashboards already available.</p> <p>To proceed, select the Dashboards section from the collapsible menu in the top left corner and click Import:</p> <p></p> <p>Then in the import via grafana.com field, enter 10000, which is the ID of the Kubernetes observability dashboard from the grafana.com marketplace, available at: https://grafana.com/grafana/dashboards/10000-kubernetes-cluster-monitoring-via-prometheus/</p> <p></p> <p>Another screen then appears, as shown below. Change the data source to Prometheus and hit the Import button:</p> <p></p> <p>As a result, the Grafana Kubernetes observability dashboard gets populated:</p> <p></p>"},{"location":"kubernetes/Kubernetes-cluster-observability-with-Prometheus-and-Grafana-on-3Engines-Cloud.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>You can find and import many other dashboards for Kubernetes observability by browsing https://grafana.com/grafana/dashboards/. Some examples are the dashboards with IDs 315, 15758 or 15761, among many others.</p> <p>The following article shows another approach to creating a Kubernetes dashboard:</p> <p>Using Dashboard To Access Kubernetes Cluster Post Deployment On 3Engines Cloud OpenStack Magnum</p>"},{"location":"kubernetes/Private-container-registries-with-Harbor-on-3Engines-Cloud-Kubernetes.html.html","title":"Private container registries with Harbor on 3Engines Cloud Kubernetes\ud83d\udd17","text":"<p>A fundamental component of the container-based ecosystem is the container registry, used for storing and distributing container images. There are a few popular public container registries, which serve this purpose in a software-as-a-service model; the most popular is DockerHub.</p> <p>In this article, we are using Harbor, which is a popular open-source option for running private registries. 
It is compliant with OCI (Open Container Initiative), which makes it suitable to work with standard container images. It ships with multiple enterprise-ready features out of the box.</p>"},{"location":"kubernetes/Private-container-registries-with-Harbor-on-3Engines-Cloud-Kubernetes.html.html#benefits-of-using-your-own-private-container-registry","title":"Benefits of using your own private container registry\ud83d\udd17","text":"<p>When you deploy your own private container registry, the benefits include, among others:</p> <ul> <li>full control of the storage of your images and the way of accessing them</li> <li>privacy for proprietary and private images</li> <li>customized configuration for logging, authentication etc.</li> </ul> <p>You can also use Role-based access control on the Harbor project level to specify and enforce which users have permission to publish updated images, to consume the available ones and so on.</p>"},{"location":"kubernetes/Private-container-registries-with-Harbor-on-3Engines-Cloud-Kubernetes.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Deploy Harbor private registry with Bitnami-Harbor Helm chart</li> <li>Access Harbor from browser</li> <li>Associate the A record of your domain to Harbor\u2019s IP address</li> <li>Create a project in Harbor</li> <li>Create a Dockerfile for our custom image</li> <li>Ensure trust from our local Docker instance</li> <li>Build our image locally</li> <li>Upload a Docker image to your Harbor instance</li> <li>Download a Docker image from your Harbor instance</li> </ul>"},{"location":"kubernetes/Private-container-registries-with-Harbor-on-3Engines-Cloud-Kubernetes.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Hosting</p> <p>You need a 3Engines Cloud hosting account with Horizon interface https://horizon.3Engines.com.</p> <p>No. 2 A cluster on 3Engines-Cloud cloud</p> <p>A Kubernetes cluster on 3Engines Cloud cloud. 
Follow guidelines in this article How to Create a Kubernetes Cluster Using 3Engines Cloud OpenStack Magnum.</p> <p>No. 3 kubectl operational</p> <p>kubectl CLI tool installed and pointing to your cluster via KUBECONFIG environment variable. Article How To Access Kubernetes Cluster Post Deployment Using Kubectl On 3Engines Cloud OpenStack Magnum provides further guidance.</p> <p>No. 4 Familiarity with deploying Helm charts</p> <p>See this article:</p> <p>Deploying Helm Charts on Magnum Kubernetes Clusters on 3Engines Cloud Cloud</p> <p>No. 5 Domain purchased from a registrar</p> <p>You should own a domain, purchased from any registrar (domain reseller). Obtaining a domain from registrars is not covered in this article.</p> <p>No. 6 Use DNS service in Horizon to link Harbor service to the domain name</p> <p>This is optional. Here is the article with detailed information:</p> <p>DNS as a Service on 3Engines Cloud Hosting</p> <p>No. 7 Docker installed on your machine</p> <p>See How to install and use Docker on Ubuntu 24.04.</p>"},{"location":"kubernetes/Private-container-registries-with-Harbor-on-3Engines-Cloud-Kubernetes.html.html#deploy-harbor-private-registry-with-bitnami-harbor-helm-chart","title":"Deploy Harbor private registry with Bitnami-Harbor Helm chart\ud83d\udd17","text":"<p>The first step to deploy Harbor private registry is to create a dedicated namespace to host Harbor artifacts:</p> <pre><code>kubectl create ns harbor\n</code></pre> <p>Then we add Bitnami repository to Helm:</p> <pre><code>helm repo add bitnami https://charts.bitnami.com/bitnami\n</code></pre> <p>We will then prepare a configuration file, which we can use to control various parameters of our deployment. 
If you want to have a view of all possible configuration parameters, you can download the default configuration values.yaml:</p> <pre><code>helm show values bitnami/harbor &gt; values.yaml\n</code></pre> <p>You can then see the configuration parameters with</p> <pre><code>cat values.yaml\n</code></pre> <p>Otherwise, to proceed with the article, use the nano editor to create a new file harbor-values.yaml</p> <pre><code>nano harbor-values.yaml\n</code></pre> <p>and paste the following contents:</p> <pre><code>externalURL: mysampledomain.info\nnginx:\n tls:\n commonName: mysampledomain.info\nadminPassword: Harbor12345\n</code></pre> <p>These settings deploy the Harbor portal as a service of LoadBalancer type, and the SSL termination is delegated to NGINX, which gets deployed alongside as a Kubernetes pod.</p> <p>Warning</p> <p>We use mysampledomain.info for demonstration purposes only. Please replace this with a real domain you own while running the code in this article.</p> <p>For demonstration we also use a simple password, which can be replaced after the initial login.</p> <p>Now install the chart with the following command:</p> <pre><code>helm install harbor bitnami/harbor --values harbor-values.yaml -n harbor\n</code></pre> <p>The output should be similar to the following:</p> <pre><code>NAME: harbor\nLAST DEPLOYED: Tue Aug 1 15:48:44 2023\nNAMESPACE: harbor-bitnami\nSTATUS: deployed\nREVISION: 1\nTEST SUITE: None\nNOTES:\nCHART NAME: harbor\nCHART VERSION: 16.6.5\nAPP VERSION: 2.8.1\n\n** Please be patient while the chart is being deployed **\n\n1. Get the Harbor URL:\n\n NOTE: It may take a few minutes for the LoadBalancer IP to be available.\n Watch the status with: 'kubectl get svc --namespace harbor-bitnami -w harbor'\n export SERVICE_IP=$(kubectl get svc --namespace harbor-bitnami harbor --template \"{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}\")\n echo \"Harbor URL: http://$SERVICE_IP/\"\n\n2. 
Login with the following credentials to see your Harbor application\n\n echo Username: \"admin\"\n echo Password: $(kubectl get secret --namespace harbor-bitnami harbor-core-envvars -o jsonpath=\"{.data.HARBOR_ADMIN_PASSWORD}\" | base64 -d)\n</code></pre>"},{"location":"kubernetes/Private-container-registries-with-Harbor-on-3Engines-Cloud-Kubernetes.html.html#access-harbor-from-browser","title":"Access Harbor from browser\ud83d\udd17","text":"<p>With the previous steps followed, you should be able to access the Harbor portal. The following command will display all of the services deployed:</p> <pre><code>kubectl get services -n harbor\n</code></pre> <p>Here they are:</p> <pre><code>$ kubectl get services -n harbor-bitnami\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nharbor LoadBalancer 10.254.208.73 64.225.133.148 80:32417/TCP,443:31448/TCP,4443:31407/TCP 4h2m\nharbor-chartmuseum ClusterIP 10.254.11.204 &lt;none&gt; 80/TCP 4h2m\nharbor-core ClusterIP 10.254.209.231 &lt;none&gt; 80/TCP 4h2m\nharbor-jobservice ClusterIP 10.254.228.203 &lt;none&gt; 80/TCP 4h2m\nharbor-notary-server ClusterIP 10.254.189.61 &lt;none&gt; 4443/TCP 4h2m\nharbor-notary-signer ClusterIP 10.254.81.205 &lt;none&gt; 7899/TCP 4h2m\nharbor-portal ClusterIP 10.254.217.77 &lt;none&gt; 80/TCP 4h2m\nharbor-postgresql ClusterIP 10.254.254.0 &lt;none&gt; 5432/TCP 4h2m\nharbor-postgresql-hl ClusterIP None &lt;none&gt; 5432/TCP 4h2m\nharbor-redis-headless ClusterIP None &lt;none&gt; 6379/TCP 4h2m\nharbor-redis-master ClusterIP 10.254.137.87 &lt;none&gt; 6379/TCP 4h2m\nharbor-registry ClusterIP 10.254.2.234 &lt;none&gt; 5000/TCP,8080/TCP 4h2m\nharbor-trivy ClusterIP 10.254.249.99 &lt;none&gt; 8080/TCP 4h2m\n</code></pre> <p>Explaining the purpose of several artifacts is beyond the scope of this article. 
The key service that is interesting to us at this stage is harbor, which was deployed as a LoadBalancer type with public IP 64.225.134.148.</p>"},{"location":"kubernetes/Private-container-registries-with-Harbor-on-3Engines-Cloud-Kubernetes.html.html#associate-the-a-record-of-your-domain-to-harbors-ip-address","title":"Associate the A record of your domain to Harbor\u2019s IP address\ud83d\udd17","text":"<p>The final step is to associate the A record of your domain to Harbor\u2019s IP address.</p> Create or edit the A record through your domain registrar. The exact steps will vary from one registrar to another, so explaining them is out of the scope of this article. Alternatively, create or edit the A record through the DNS as a service available in your 3Engines Cloud account. This is explained in Prerequisite No. 6. Use the commands DNS \u2013&gt; Zones and select the name of the site you are using instead of mysampledomain.info, then click on Record Sets. In the Type column, find the row of type A - Address record and click on the Update field on the right side to enter or change the value in that row: <p></p> <p>In this screenshot, the value 64.225.134.148 is already entered into that Update field \u2013 you will, of course, supply your own IP value here instead.</p> <p>With the above steps completed, you can access Harbor from the expected URL, in our case: https://mysampledomain.info. 
Since the chart generated self-signed certificates, you will first need to accept the \u201cNot Secure\u201d warning provided by the browser:</p> <p></p> <p>Note</p> <p>This warning will vary from one browser to another.</p> <p>To log in to your instance, use these login details:</p> login: admin password: Harbor12345"},{"location":"kubernetes/Private-container-registries-with-Harbor-on-3Engines-Cloud-Kubernetes.html.html#create-a-project-in-harbor","title":"Create a project in Harbor\ud83d\udd17","text":"<p>When you log in to Harbor, you enter the Projects section:</p> <p></p> <p>A project in Harbor is a separate space where container images can be placed. An image needs to be placed in the scope of a specific project. As a Harbor admin, you can also apply Role-Based Access Control on the Harbor project level, so that only specific users can access or perform certain operations within the scope of a given project.</p> <p>To create a new project, click on the New Project button. In this article, we will upload a public image that can be accessed by anyone, and let it be called simply myproject:</p> <p></p>"},{"location":"kubernetes/Private-container-registries-with-Harbor-on-3Engines-Cloud-Kubernetes.html.html#create-a-dockerfile-for-our-custom-image","title":"Create a Dockerfile for our custom image\ud83d\udd17","text":"<p>The Harbor service is running and we can use it to upload our Docker images. 
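The steps below (creating the folder and the Dockerfile) can also be scripted non-interactively; a minimal sketch, using a heredoc instead of nano:

```shell
# Create the helloharbor folder and the minimal Dockerfile in one shot.
mkdir -p helloharbor
cat > helloharbor/Dockerfile <<'EOF'
FROM alpine
CMD ["/bin/sh", "-c", "echo 'Hello Harbor!'"]
EOF
# Show the generated file to confirm its contents.
cat helloharbor/Dockerfile
```

Either route yields the same Dockerfile; the heredoc form is convenient if you want to script the whole walkthrough.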
We will generate a minimal image, so just create an empty folder called helloharbor, with a single file called Dockerfile</p> <p>Dockerfile</p> <pre><code>mkdir helloharbor\ncd helloharbor\nnano Dockerfile\n</code></pre> <p>and its contents are:</p> <pre><code>FROM alpine\nCMD [\"/bin/sh\", \"-c\", \"echo 'Hello Harbor!'\"]\n</code></pre>"},{"location":"kubernetes/Private-container-registries-with-Harbor-on-3Engines-Cloud-Kubernetes.html.html#ensure-trust-from-our-local-docker-instance","title":"Ensure trust from our local Docker instance\ud83d\udd17","text":"<p>In order to build our Docker image in the next steps and upload this image to Harbor, we need to ensure communication of our local Docker instance with Harbor. To fulfill this objective, proceed as follows:</p>"},{"location":"kubernetes/Private-container-registries-with-Harbor-on-3Engines-Cloud-Kubernetes.html.html#ensure-docker-trust-step-1-bypass-docker-validating-the-domain-certificate","title":"Ensure Docker trust - Step 1. Bypass Docker validating the domain certificate\ud83d\udd17","text":"<p>Bypass Docker validating the certificate of the domain where Harbor is running. Docker would not trust this certificate, because it is self-signed. To bypass this validation, create a file called daemon.json in the /etc/docker directory on your local machine:</p> <pre><code>sudo chmod 777 /etc/docker\n</code></pre> <p>You are using sudo, so you will be asked to supply the password. 
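If you prefer to script this step rather than use nano, the daemon.json content can be written and sanity-checked as valid JSON before it goes live; a sketch (it writes a local file for illustration — the real target is /etc/docker/daemon.json, and mysampledomain.info is the placeholder domain used throughout):

```shell
# Write the Docker daemon configuration that trusts our self-signed registry.
cat > daemon.json <<'EOF'
{
  "insecure-registries" : [ "mysampledomain.info" ]
}
EOF
# Validate the JSON before copying it to /etc/docker/daemon.json,
# since a malformed file would prevent the Docker daemon from starting.
python3 -m json.tool daemon.json
```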
Now create the file:</p> <pre><code>nano /etc/docker/daemon.json\n</code></pre> <p>and fill it in with this content, then save with Ctrl-X, Y:</p> <pre><code>{\n \"insecure-registries\" : [ \"mysampledomain.info\" ]\n}\n</code></pre> <p>As always, replace mysampledomain.info with your own domain.</p> <p>For production, you would rather set up a proper HTTPS certificate for the domain.</p>"},{"location":"kubernetes/Private-container-registries-with-Harbor-on-3Engines-Cloud-Kubernetes.html.html#ensure-docker-trust-step-2-ensure-docker-trusts-the-harbors-certificate-authority","title":"Ensure Docker trust - Step 2. Ensure Docker trusts the Harbor\u2019s Certificate Authority\ud83d\udd17","text":"<p>To do so, we download the ca.crt file from our Harbor portal instance from the myproject project view:</p> <p></p> <p>The exact way of installing the certificate will depend on the environment you are running Docker on:</p> Install the certificate on Linux <p>Create a nested directory path /etc/docker/certs.d/mysampledomain.info and upload the ca.crt file to this folder:</p> <pre><code>sudo mkdir -p /etc/docker/certs.d/mysampledomain.info\nsudo cp ~/ca.crt /etc/docker/certs.d/mysampledomain.info\n</code></pre> Install the certificate on WSL2 running on Windows 10 or 11 <p>In WSL2, you would need to upload the certificate to the Windows ROOT CA store with the following sequence:</p> <ul> <li>Click on Start and type Manage Computer Certificates</li> <li>Right-click on Trusted Root Certification Authorities, then All tasks and Import</li> <li>Browse to the ca.crt file location and then keep pressing Next to complete the wizard</li> <li>Restart Docker from the Docker Desktop menu</li> </ul>"},{"location":"kubernetes/Private-container-registries-with-Harbor-on-3Engines-Cloud-Kubernetes.html.html#ensure-docker-trust-step-3-restart-docker","title":"Ensure Docker trust - Step 3. 
Restart Docker\ud83d\udd17","text":"<p>Restart Docker with:</p> <pre><code>sudo systemctl restart docker\n</code></pre>"},{"location":"kubernetes/Private-container-registries-with-Harbor-on-3Engines-Cloud-Kubernetes.html.html#build-our-image-locally","title":"Build our image locally\ud83d\udd17","text":"<p>After these steps, we can tag our image and build it locally (from the location where the Dockerfile is placed):</p> <pre><code>docker build -t mysampledomain.info/myproject/helloharbor .\n</code></pre> <p>Next, we can log in to the Harbor registry with our admin login and Harbor12345 password:</p> <pre><code>docker login mysampledomain.info\n</code></pre>"},{"location":"kubernetes/Private-container-registries-with-Harbor-on-3Engines-Cloud-Kubernetes.html.html#upload-a-docker-image-to-your-harbor-instance","title":"Upload a Docker image to your Harbor instance\ud83d\udd17","text":"<p>Lastly, push the image to the repo:</p> <pre><code>docker push mysampledomain.info/myproject/helloharbor\n</code></pre> <p>The result will be similar to the following:</p> <p></p>"},{"location":"kubernetes/Private-container-registries-with-Harbor-on-3Engines-Cloud-Kubernetes.html.html#download-a-docker-image-from-your-harbor-instance","title":"Download a Docker image from your Harbor instance\ud83d\udd17","text":"<p>To demonstrate downloading images from our Harbor repository, we can first delete the local Docker image we created earlier.</p> <pre><code>docker image rm mysampledomain.info/myproject/helloharbor\n</code></pre> <p>To verify, check that it is no longer on our local images list:</p> <pre><code>docker images\n</code></pre> <p>Then pull it from the Harbor remote:</p> <pre><code>docker pull mysampledomain.info/myproject/helloharbor\n</code></pre>"},{"location":"kubernetes/Sealed-Secrets-on-3Engines-Cloud-Kubernetes.html.html","title":"Sealed Secrets on 3Engines Cloud Kubernetes\ud83d\udd17","text":"<p>Sealed Secrets improve the security of our Kubernetes deployments by enabling encrypted Kubernetes 
secrets. This allows storing such secrets in source control and following the GitOps practice of keeping all configuration in code.</p> <p>In this article, we will install tools to work with Sealed Secrets and demonstrate using Sealed Secrets on 3Engines Cloud cloud.</p>"},{"location":"kubernetes/Sealed-Secrets-on-3Engines-Cloud-Kubernetes.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Install the Sealed Secrets controller</li> <li>Install the kubeseal command line utility</li> <li>Create a sealed secret</li> <li>Unseal the secret</li> <li>Verify</li> </ul>"},{"location":"kubernetes/Sealed-Secrets-on-3Engines-Cloud-Kubernetes.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p> <p>No. 2 Understand Helm deployments</p> <p>To install Sealed Secrets on a Kubernetes cluster, we will use the appropriate Helm chart. The following article explains the procedure:</p> <p>Deploying Helm Charts on Magnum Kubernetes Clusters on 3Engines Cloud Cloud</p> <p>No. 3 Kubernetes cluster</p> <p>General explanation of how to create a Kubernetes cluster is here:</p> <p>How to Create a Kubernetes Cluster Using 3Engines Cloud OpenStack Magnum</p> <p>For a new cluster, using the latest version of the cluster template is always recommended. This article was tested with Kubernetes 1.25.</p> <p>No. 4 Access to cluster with kubectl</p> <p>How To Access Kubernetes Cluster Post Deployment Using Kubectl On 3Engines Cloud OpenStack Magnum</p>"},{"location":"kubernetes/Sealed-Secrets-on-3Engines-Cloud-Kubernetes.html.html#step-1-install-the-sealed-secrets-controller","title":"Step 1 Install the Sealed Secrets controller\ud83d\udd17","text":"<p>In order to use Sealed Secrets, we will first install the Sealed Secrets controller to our Kubernetes cluster. 
We can use Helm for this purpose and the first step is to add the Helm repository. To add the repo locally, use the following command:</p> <pre><code>helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets\n</code></pre> <p>The next step is to install the SealedSecrets controller chart. We need to install it to the namespace kube-system. Note that we also override the name of the controller, so that it corresponds to the default name used by the CLI utility kubeseal, which we will install in the following section.</p> <pre><code>helm install sealed-secrets -n kube-system --set-string fullnameOverride=sealed-secrets-controller sealed-secrets/sealed-secrets\n</code></pre> <p>The chart installs several resources on our cluster. The key ones are:</p> <ul> <li>SealedSecret Custom Resource Definition (CRD) - defines the template for sealed secrets that will be created on the cluster</li> <li>The SealedSecrets controller pod running in the kube-system namespace.</li> </ul>"},{"location":"kubernetes/Sealed-Secrets-on-3Engines-Cloud-Kubernetes.html.html#step-2-install-the-kubeseal-command-line-utility","title":"Step 2 Install the kubeseal command line utility\ud83d\udd17","text":"<p>The kubeseal CLI tool is used for encrypting secrets using the public certificate of the controller. 
To proceed, install kubeseal with the following set of commands:</p> <pre><code>KUBESEAL_VERSION='0.23.0'\nwget \"https://github.com/bitnami-labs/sealed-secrets/releases/download/v${KUBESEAL_VERSION:?}/kubeseal-${KUBESEAL_VERSION:?}-linux-amd64.tar.gz\"\ntar -xvzf kubeseal-${KUBESEAL_VERSION:?}-linux-amd64.tar.gz kubeseal\nsudo install -m 755 kubeseal /usr/local/bin/kubeseal\n</code></pre> <p>You can verify that kubeseal was properly installed by running:</p> <pre><code>kubeseal --version\n</code></pre> <p>which will return a result similar to the following:</p> <p></p>"},{"location":"kubernetes/Sealed-Secrets-on-3Engines-Cloud-Kubernetes.html.html#step-3-create-a-sealed-secret","title":"Step 3 Create a sealed secret\ud83d\udd17","text":"<p>We can use Sealed Secrets to encrypt the secrets, which can be decrypted only by the controller running on the cluster.</p> <p>A sealed secret needs to be created based on a regular, unencrypted Kubernetes secret. However, we don\u2019t want to commit this base secret to our Kubernetes cluster. We also do not want to create a permanent file with the unencrypted secret contents, to avoid accidentally committing it to source control.</p> <p>Therefore, we will use kubectl to create a regular secret only temporarily, using the --dry-run=client parameter. The secret has a key foo and value bar. kubectl outputs this temporary secret, and we then pipe this output to the kubeseal utility. 
kubeseal seals (encrypts) the secret and saves it to a file called mysecret.yaml.</p> <pre><code>kubectl create secret generic mysecret \\\n--dry-run=client \\\n--from-literal=foo=bar -o yaml | kubeseal \\\n--format yaml &gt; mysecret.yaml\n</code></pre> <p>When we view the file, we can see that the contents are encrypted and safe to store in source control.</p>"},{"location":"kubernetes/Sealed-Secrets-on-3Engines-Cloud-Kubernetes.html.html#step-4-unseal-the-secret","title":"Step 4 Unseal the secret\ud83d\udd17","text":"<p>To unseal the secret and make it available and usable in the cluster, we perform the following command:</p> <pre><code>kubectl create -f mysecret.yaml\n</code></pre> <p>This, after a few seconds, generates a regular Kubernetes secret which is readable by our cluster. We can verify this with these two commands:</p> <pre><code>kubectl get secret mysecret -o yaml\necho YmFy | base64 --decode\n</code></pre> <p>The former command outputs the YAML of the secret, while the latter decodes the value of the data stored under the key foo, which outputs the expected result: bar.</p> <p>The results can also be seen on the screen below:</p> <p></p>"},{"location":"kubernetes/Sealed-Secrets-on-3Engines-Cloud-Kubernetes.html.html#step-5-verify","title":"Step 5 Verify\ud83d\udd17","text":"<p>The generated secret can be used as a regular Kubernetes secret. To test, create a file test-pod.yaml with the following contents:</p> <p>test-pod.yaml</p> <pre><code>apiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\nspec:\n containers:\n - name: nginx\n image: nginx:latest\n env:\n - name: TEST_VAR\n valueFrom:\n secretKeyRef:\n name: mysecret\n key: foo\n</code></pre> <p>This launches a minimal pod called nginx which is based on the nginx server container image. In the container inside the pod, we create an environment variable called TEST_VAR. The value of the variable is assigned from our secret mysecret via the key foo. 
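As an aside on the encoding seen in the verification step above: the base64 applied to secret data is plain encoding, not encryption — the sealing is what protects the value at rest. The round trip can be checked locally:

```shell
# base64 round trip for the secret value used in this walkthrough.
printf 'bar' | base64           # encodes to YmFy
printf 'YmFy' | base64 --decode # decodes back to bar
```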
Apply the example with the following command:</p> <pre><code>kubectl apply -f test-pod.yaml\n</code></pre> <p>Then enter the container inside the nginx pod:</p> <pre><code>kubectl exec -it nginx -- sh\n</code></pre> <p>The command prompt will change to #, meaning that the commands you enter are executed inside the container. Execute the printenv command to see the environment variables. We can see our variable TEST_VAR with the value bar, as expected:</p> <p></p>"},{"location":"kubernetes/Sealed-Secrets-on-3Engines-Cloud-Kubernetes.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>Sealed Secrets present a viable alternative to secret management using additional tools such as HashiCorp Vault. For additional information, see Installing HashiCorp Vault on 3Engines Cloud Magnum.</p>"},{"location":"kubernetes/Using-Dashboard-To-Access-Kubernetes-Cluster-Post-Deployment-On-3Engines-Cloud-OpenStack-Magnum.html.html","title":"Using Dashboard To Access Kubernetes Cluster Post Deployment On 3Engines Cloud OpenStack Magnum\ud83d\udd17","text":"<p>After the Kubernetes cluster has been created, you can access it through the command-line tool kubectl, or through a visual interface called the Kubernetes dashboard. 
The dashboard is a GUI to the Kubernetes cluster, much as kubectl is a CLI to it.</p> <p>This article shows how to install the Kubernetes dashboard.</p>"},{"location":"kubernetes/Using-Dashboard-To-Access-Kubernetes-Cluster-Post-Deployment-On-3Engines-Cloud-OpenStack-Magnum.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Deploying the dashboard</li> <li>Creating a sample user</li> <li>Creating secret for admin-user</li> <li>Getting the bearer token for authentication to dashboard</li> <li>Creating a separate terminal window for proxy access</li> <li>Running the dashboard in browser</li> </ul>"},{"location":"kubernetes/Using-Dashboard-To-Access-Kubernetes-Cluster-Post-Deployment-On-3Engines-Cloud-OpenStack-Magnum.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Hosting</p> <p>You need a 3Engines Cloud hosting account with Horizon interface https://horizon.3Engines.com.</p> <p>No. 
2 Cluster and kubectl should be already operational</p> <p>To eventually set up a cluster and connect it to the kubectl tool, see this article How To Access Kubernetes Cluster Post Deployment Using Kubectl On 3Engines Cloud OpenStack Magnum.</p> <p>The important intermediary result of that article is a command like this:</p> <pre><code>export KUBECONFIG=/home/user/k8sdir/config\n</code></pre> <p>Note the exact command which, in your case, sets the value of the KUBECONFIG variable, as you will need it to start a new terminal window from which the dashboard will run.</p>"},{"location":"kubernetes/Using-Dashboard-To-Access-Kubernetes-Cluster-Post-Deployment-On-3Engines-Cloud-OpenStack-Magnum.html.html#step-1-deploying-the-dashboard","title":"Step 1 Deploying the Dashboard\ud83d\udd17","text":"<p>Install it with the following command:</p> <pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml\n</code></pre> <p>The result is</p> <p></p>"},{"location":"kubernetes/Using-Dashboard-To-Access-Kubernetes-Cluster-Post-Deployment-On-3Engines-Cloud-OpenStack-Magnum.html.html#step-2-creating-a-sample-user","title":"Step 2 Creating a sample user\ud83d\udd17","text":"<p>Next, you create a bearer token which will serve as an authorization token for the Dashboard. To that end, you will create two local files and \u201csend\u201d them to the cloud using the kubectl command. 
The first file is called dashboard-adminuser.yaml and its contents are</p> <pre><code>apiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: admin-user\n namespace: kubernetes-dashboard\n</code></pre> <p>Use a text editor of your choice to create that file; on macOS or Linux you can use nano, like this:</p> <pre><code>nano dashboard-adminuser.yaml\n</code></pre> <p>Install that file on the Kubernetes cluster with this command:</p> <pre><code>kubectl apply -f dashboard-adminuser.yaml\n</code></pre> <p>The second file to create is</p> <pre><code>nano dashboard-clusterolebinding.yaml\n</code></pre> <p>and its contents should be:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: admin-user\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: cluster-admin\nsubjects:\n- kind: ServiceAccount\n name: admin-user\n namespace: kubernetes-dashboard\n</code></pre> <p>The command to send it to the cloud is</p> <pre><code>kubectl apply -f dashboard-clusterolebinding.yaml\n</code></pre>"},{"location":"kubernetes/Using-Dashboard-To-Access-Kubernetes-Cluster-Post-Deployment-On-3Engines-Cloud-OpenStack-Magnum.html.html#step-3-create-secret-for-admin-user","title":"Step 3 Create secret for admin-user\ud83d\udd17","text":"<p>We have to manually create a token for the admin user.</p> <p>Create the file admin-user-token.yaml</p> <pre><code>nano admin-user-token.yaml\n</code></pre> <p>Enter the following code:</p> <pre><code>apiVersion: v1\nkind: Secret\nmetadata:\n name: admin-user-token\n namespace: kubernetes-dashboard\n annotations:\n kubernetes.io/service-account.name: \"admin-user\"\ntype: kubernetes.io/service-account-token\n</code></pre> <p>Execute it with</p> <pre><code>kubectl apply -f 
admin-user-token.yaml\n</code></pre>"},{"location":"kubernetes/Using-Dashboard-To-Access-Kubernetes-Cluster-Post-Deployment-On-3Engines-Cloud-OpenStack-Magnum.html.html#step-4-get-the-bearer-token-for-authentication-to-dashboard","title":"Step 4 Get the bearer token for authentication to dashboard\ud83d\udd17","text":"<p>The final step is to get the bearer token, which is a long string that will authenticate calls to Dashboard:</p> <pre><code>kubectl -n kubernetes-dashboard get secret admin-user-token -o jsonpath=\"{.data.token}\" | base64 --decode\n</code></pre> <p>The bearer token string will be printed on the terminal screen.</p> <p></p> <p>Copy it to a text editor; it will be needed after you access the Dashboard UI through an HTTPS call.</p> <p>Note</p> <p>If the last character of the bearer token string is %, it may be a character that denotes the end of the string but is not a part of it. If you copy the bearer string and it is not recognized, try copying it without this trailing % character.</p>"},{"location":"kubernetes/Using-Dashboard-To-Access-Kubernetes-Cluster-Post-Deployment-On-3Engines-Cloud-OpenStack-Magnum.html.html#step-5-create-a-separate-terminal-window-for-proxy-access","title":"Step 5 Create a separate terminal window for proxy access\ud83d\udd17","text":"<p>We shall now use a proxy server for the Kubernetes API server. 
The proxy server</p> <ul> <li>handles certificates automatically when accessing the Kubernetes API,</li> <li>connects to API extensions or dashboards (like in this article),</li> <li>enables testing of API calls locally before automating them in scripts.</li> </ul> <p>To enable the connection, start a separate terminal window and first set up the config command for that window:</p> <pre><code>export KUBECONFIG=/home/user/k8sdir/config\n</code></pre> <p>Change that path to point to your own config file on your computer.</p> <p>The next command in that new window is:</p> <pre><code>kubectl proxy\n</code></pre> <p>The server is activated on port 8001:</p> <p></p>"},{"location":"kubernetes/Using-Dashboard-To-Access-Kubernetes-Cluster-Post-Deployment-On-3Engines-Cloud-OpenStack-Magnum.html.html#step-6-see-the-dashboard-in-browser","title":"Step 6 See the dashboard in browser\ud83d\udd17","text":"<p>Then enter this address into the browser:</p> <pre><code>http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/\n</code></pre> <p></p> <p>Enter the token, click on Sign In and get the Dashboard UI for the Kubernetes cluster.</p> <p></p> <p>The Kubernetes Dashboard organizes working with the cluster in a visual and interactive way. For instance, click on Nodes on the left side to see the nodes that the k8s-cluster has.</p>"},{"location":"kubernetes/Using-Dashboard-To-Access-Kubernetes-Cluster-Post-Deployment-On-3Engines-Cloud-OpenStack-Magnum.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>You can still use kubectl, or alternate between it and the Dashboard. 
Either way, you can</p> <ul> <li>deploy apps on the cluster,</li> <li>access multiple clusters,</li> <li>create load balancers,</li> <li>access applications in the cluster using port forwarding,</li> <li>use a Service to access an application in the cluster,</li> <li>list container images in the cluster,</li> <li>use Services, Deployments and all other resources in a Kubernetes cluster.</li> </ul>"},{"location":"kubernetes/Using-Kubernetes-Ingress-on-3Engines-Cloud-OpenStack-Magnum.html.html","title":"Using Kubernetes Ingress on 3Engines Cloud OpenStack Magnum\ud83d\udd17","text":"<p>The Ingress feature in Kubernetes routes traffic from outside of the cluster to the services within the cluster. With Ingress, multiple Kubernetes services can be exposed using a single Load Balancer.</p> <p>In this article, we will provide insight into how Ingress is implemented on the cloud. We will also demonstrate a practical example of exposing Kubernetes services using Ingress on the cloud. In the end, you will be able to create one or more sites and services running on a Kubernetes cluster. The services you create in this way will</p> <ul> <li>run on the same IP address without the need to create an extra LoadBalancer per service and will also</li> <li>automatically enjoy all of the Kubernetes cluster benefits \u2013 reliability, scalability, etc.</li> </ul>"},{"location":"kubernetes/Using-Kubernetes-Ingress-on-3Engines-Cloud-OpenStack-Magnum.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Create Magnum Kubernetes cluster with NGINX Ingress enabled</li> <li>Build and expose Nginx and Apache webservers for testing</li> <li>Create Ingress Resource</li> <li>Verify that Ingress can access both testing servers</li> </ul>"},{"location":"kubernetes/Using-Kubernetes-Ingress-on-3Engines-Cloud-OpenStack-Magnum.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 
1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p> <p>No. 2 Basic knowledge of Kubernetes fundamentals</p> <p>Basic knowledge of Kubernetes fundamentals will come in handy: cluster creation, pods, deployments, services and so on.</p> <p>No. 3 Access to kubectl command</p> <p>To install the necessary software (if you haven\u2019t done so already), see the article How To Access Kubernetes Cluster Post Deployment Using Kubectl On 3Engines Cloud OpenStack Magnum.</p> <p>The net result of following the instructions in that and the related articles will be</p> <ul> <li>a cluster formed, healthy and ready to be used, as well as</li> <li>enabling access to the cluster from the local machine (i.e. having the kubectl command operational).</li> </ul>"},{"location":"kubernetes/Using-Kubernetes-Ingress-on-3Engines-Cloud-OpenStack-Magnum.html.html#step-1-create-a-magnum-kubernetes-cluster-with-nginx-ingress-enabled","title":"Step 1 Create a Magnum Kubernetes cluster with NGINX Ingress enabled\ud83d\udd17","text":"<p>When we create a Kubernetes cluster on the cloud, we can deploy it with a preconfigured ingress setup. This requires minimal setup and is described in this help section: How to Create a Kubernetes Cluster Using 3Engines Cloud OpenStack Magnum.</p> <p>Such a cluster is deployed with an NGINX ingress controller and the default ingress backend. The role of the controller is to enable the provisioning of the infrastructure, e.g. the (virtual) load balancer. The role of the backend is to provide access to this infrastructure in line with the rules defined by the ingress resource (explained later).</p> <p>We can verify the availability of these artifacts by typing the following command:</p> <pre><code>kubectl get pods -n kube-system\n</code></pre> <p>The output should be similar to the one below. 
We see that there is an ingress controller created, and also an ingress backend, both running as pods on our cluster.</p> <pre><code>kubectl get pods -n kube-system\nNAME READY STATUS RESTARTS AGE\n...\nmagnum-nginx-ingress-controller-zxgj8 1/1 Running 0 65d\nmagnum-nginx-ingress-default-backend-9dfb4c685-8fjdv 1/1 Running 0 83d\n...\n</code></pre> <p>There is also an ingress class available in the default namespace:</p> <pre><code>kubectl get ingressclass\nNAME CONTROLLER PARAMETERS AGE\nnginx k8s.io/ingress-nginx &lt;none&gt; 7m36s\n</code></pre>"},{"location":"kubernetes/Using-Kubernetes-Ingress-on-3Engines-Cloud-OpenStack-Magnum.html.html#step-2-creating-services-for-nginx-and-apache-webserver","title":"Step 2 Creating services for Nginx and Apache webserver\ud83d\udd17","text":"<p>You are now going to build and expose two minimal applications:</p> <ul> <li>Nginx server</li> <li>Apache webserver</li> </ul> <p>They will be both exposed from a single public IP address using a single default ingress Load Balancer. The web pages served from each server will be accessible in the browser with a unified routing scheme. In a similar fashion, one could mix and match applications written in a variety of other technologies.</p> <p>First, let\u2019s create the Nginx server app. For brevity, we use the command line with default settings:</p> <pre><code>kubectl create deployment nginx-web --image=nginx\nkubectl expose deployment nginx-web --type=NodePort --port=80\n</code></pre> <p>Similarly, we create the Apache app:</p> <pre><code>kubectl create deployment apache-web --image=httpd\nkubectl expose deployment apache-web --type=NodePort --port=80\n</code></pre> <p>The above actions result in creating a service for each app, which can be inspected using the below command. 
Behind each service, there is a deployment and a running pod.</p> <pre><code>kubectl get services\n</code></pre> <p>You should see an output similar to the following:</p> <pre><code>kubectl get services\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\napache-web NodePort 10.254.80.182 &lt;none&gt; 80:32660/TCP 75s\nkubernetes ClusterIP 10.254.0.1 &lt;none&gt; 443/TCP 84d\nnginx-web NodePort 10.254.101.230 &lt;none&gt; 80:32532/TCP 36m\n</code></pre> <p>The services were created with the type NodePort, which is the type required to work with ingress. Therefore, they are not yet exposed under a public IP. The servers are, however, already running and serving their default welcome pages.</p> <p>You could verify that by assigning a floating IP to one of the nodes (see How to Add or Remove Floating IP\u2019s to your VM on 3Engines Cloud). Then SSH to the node and run the following command:</p> <pre><code>curl &lt;name-of-node&gt;:&lt;port-number&gt;\n</code></pre> <p>E.g. for the scenario above we see:</p> <pre><code>curl ingress-tqwzjwu2lw7p-node-1:32660\n&lt;html&gt;&lt;body&gt;&lt;h1&gt;It works!&lt;/h1&gt;&lt;/body&gt;&lt;/html&gt;\n</code></pre>"},{"location":"kubernetes/Using-Kubernetes-Ingress-on-3Engines-Cloud-OpenStack-Magnum.html.html#step-3-create-ingress-resource","title":"Step 3 Create Ingress Resource\ud83d\udd17","text":"<p>To expose the applications to a public IP address, you will need to define an Ingress Resource. Since both applications will be available from the same IP address, the ingress resource will define the detailed rules of what gets served on which route. 
In this example, the /apache route will be served from the Apache service, and all other routes will be served by the Nginx service.</p> <p>Note</p> <p>There are multiple ways the routes can be configured, we present here just a fraction of the capability.</p> <p>Create a YAML file called my-ingress-resource.yaml with the following contents:</p> <pre><code>apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n\u00a0\u00a0name: example-ingress\n\u00a0\u00a0annotations:\n\u00a0\u00a0\u00a0\u00a0nginx.ingress.kubernetes.io/rewrite-target: /\nspec:\n\u00a0\u00a0ingressClassName: nginx\n\u00a0\u00a0rules:\n\u00a0\u00a0\u00a0\u00a0- http:\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0paths:\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0- path: /*\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0pathType: Prefix\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0backend:\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0service:\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0name: nginx-web\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0port:\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0number: 80\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0- path: /apache\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0pathType: Prefix\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0backend:\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0service:\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0name: apache-web\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0port:\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0number: 80\n</code></pre> <p>And deploy with:</p> <pre><code>kubectl apply -f my-ingress-resource.yaml\n</code></pre> 
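Once the resource is applied, the routing rules can be exercised from the command line as well as from the browser. A minimal sketch, assuming the manifest above was applied unchanged (the resource name example-ingress and the two routes come from that manifest):

```shell
# Show the ingress; the ADDRESS column stays empty until a floating IP is assigned.
kubectl get ingress example-ingress

# Once an ADDRESS appears (usually after 2-5 minutes), test the routing rules.
# Substitute the real address reported above for <ADDRESS>:
#   curl http://<ADDRESS>/         -> answered by nginx-web
#   curl http://<ADDRESS>/apache   -> answered by apache-web
```

If the address never appears, inspect the resource with kubectl describe ingress example-ingress and check the logs of the magnum-nginx-ingress-controller pod in the kube-system namespace.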
<p>After some time (usually 2 to 5 minutes), verify that the floating IP has been assigned to the ingress:</p> <pre><code>kubectl get ingress\nNAME CLASS HOSTS ADDRESS PORTS AGE\nexample-ingress nginx * 64.225.130.77 80 3m16s\n</code></pre> <p>Note</p> <p>The address 64.225.130.77 is assigned automatically and in your case it will be different. Be sure to copy and use the address shown by kubectl get ingress.</p>"},{"location":"kubernetes/Using-Kubernetes-Ingress-on-3Engines-Cloud-OpenStack-Magnum.html.html#step-4-verify-that-it-works","title":"Step 4 Verify that it works\ud83d\udd17","text":"<p>Enter the ingress floating IP in the browser, followed by some example routes. You should see an output similar to the one below. Here is the screenshot for the /apache route:</p> <p></p> <p>This screenshot shows what happens on any other route \u2013 it defaults to Nginx:</p> <p></p>"},{"location":"kubernetes/Using-Kubernetes-Ingress-on-3Engines-Cloud-OpenStack-Magnum.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>You now have two of the most popular web servers installed as services within a Kubernetes cluster. Here are some ideas on how to use this setup:</p> <p>Create another service on the same server</p> <p>To create another service under the same IP address, repeat the entire procedure with another endpoint name instead of /apache. Don\u2019t forget to add the appropriate entry into the YAML file.</p> <p>Add other endpoints for use with Nginx</p> <p>You can create other endpoints and use Nginx as the basic server instead of Apache.</p> <p>Use images other than nginx and httpd</p> <p>There are many sources of containers on the Internet but the most popular catalog is Docker Hub (hub.docker.com). 
It contains operating system images with preinstalled software you want to use, which will save you the effort of downloading and testing the installation.</p> <p>Microservices</p> <p>Instead of putting all of the code and data onto one virtual machine, the Kubernetes way is to deploy multiple custom containers. A typical setup would be like this:</p> <ul> <li>pod No. 1 would contain a database, say, MariaDB, as a backend,</li> <li>pod No. 2 could contain PHPMyAdmin for a front end to the database,</li> <li>pod No. 3 could contain an installation of WordPress, which is the front end for the site visitor,</li> <li>pod No. 4 could contain your proprietary code for WordPress plugins.</li> </ul> <p>Each of these pods will take code from a specialized image. If you want to edit a part of the code, you just update the relevant Docker image on Docker Hub and redeploy.</p> <p>Use DNS to create a domain name for the server</p> <p>You can use a DNS service to connect a proper domain name to the IP address used in this article. With the addition of a Cert Manager and a free service such as Let\u2019s Encrypt, the ingress will be serving the HTTPS protocol in a straightforward way.</p>"},{"location":"kubernetes/Volume-based-vs-Ephemeral-based-Storage-for-Kubernetes-Clusters-on-3Engines-Cloud-OpenStack-Magnum.html.html","title":"Volume-based vs Ephemeral-based Storage for Kubernetes Clusters on 3Engines Cloud OpenStack Magnum\ud83d\udd17","text":"<p>Containers in Kubernetes store files on disk, and if the container crashes, the data will be lost. A new container can replace the old one but the data will not survive. Another problem appears when containers running in a pod need to share files.</p> <p>That is why Kubernetes has another type of file storage, called volumes. 
They can be either persistent or ephemeral, as measured against the lifetime of a pod:</p> <ul> <li>Ephemeral volumes are deleted when the pod is deleted, while</li> <li>Persistent volumes continue to exist even if the pod they are attached to does not exist any more.</li> </ul> <p>The concept of volumes was first popularized by Docker, where it was a directory on disk, or within a container. In 3Engines Cloud OpenStack hosting, the default docker storage is configured to use the ephemeral disk of the instance. This can be changed by specifying the docker volume size during cluster creation, symbolically like this (see below for the full command to generate a new cluster using --docker-volume-size):</p> <pre><code>openstack coe cluster create --docker-volume-size 50\n</code></pre> <p>This means that a persistent volume of 50 GB will be created and attached to each node. Using --docker-volume-size is a way to both reserve the space and declare that the storage will be persistent.</p>"},{"location":"kubernetes/Volume-based-vs-Ephemeral-based-Storage-for-Kubernetes-Clusters-on-3Engines-Cloud-OpenStack-Magnum.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>How to create a cluster when --docker-volume-size is used</li> <li>How to create a pod manifest with emptyDir as volume</li> <li>How to create a pod with that manifest</li> <li>How to execute bash commands in the container</li> <li>How to save a file into persistent storage</li> <li>How to demonstrate that the attached volume is persistent</li> </ul>"},{"location":"kubernetes/Volume-based-vs-Ephemeral-based-Storage-for-Kubernetes-Clusters-on-3Engines-Cloud-OpenStack-Magnum.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>1 Hosting</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p> <p>2 Creating clusters with CLI</p> <p>The article How To Use Command Line Interface for Kubernetes 
Clusters On 3Engines Cloud OpenStack Magnum will introduce you to the creation of clusters using a command line interface.</p> <p>3 Connect openstack client to the cloud</p> <p>Prepare the openstack and magnum clients by executing Step 2 Connect OpenStack and Magnum Clients to Horizon Cloud from the article How To Install OpenStack and Magnum Clients for Command Line Interface to 3Engines Cloud Horizon.</p> <p>4 Check available quotas</p> <p>Before creating an additional cluster, check the state of the resources with the Horizon commands Compute =&gt; Overview.</p> <p>5 Private and public keys</p> <p>You need an SSH key-pair created in the OpenStack dashboard. To create it, follow this article How to create key pair in OpenStack Dashboard on 3Engines Cloud. You will have created a keypair called \u201csshkey\u201d and you will be able to use it for this tutorial as well.</p> <p>6 Types of Volumes</p> <p>Types of volumes are described in the official Kubernetes documentation.</p>"},{"location":"kubernetes/Volume-based-vs-Ephemeral-based-Storage-for-Kubernetes-Clusters-on-3Engines-Cloud-OpenStack-Magnum.html.html#step-1-create-cluster-using-docker-volume-size","title":"Step 1 - Create Cluster Using --docker-volume-size\ud83d\udd17","text":"<p>You are going to create a new cluster called dockerspace that will use the parameter --docker-volume-size, using the following command:</p> <pre><code>openstack coe cluster create dockerspace \\\n --cluster-template k8s-1.23.16-cilium-v1.0.3 \\\n --keypair sshkey \\\n --master-count 1 \\\n --node-count 2 \\\n --docker-volume-size 50 \\\n --master-flavor eo1.large \\\n --flavor eo2.large\n</code></pre> <p>After a few minutes the new cluster dockerspace will be created.</p> <p>Click on Container Infra =&gt; Clusters to show the three clusters in the system: authenabled, k8s-cluster and dockerspace.</p> <p></p> <p>Here are their instances (after clicking on Compute =&gt; Instances):</p> <p></p> <p>They will have at least two instances each, one for the master and one for the worker 
node. dockerspace has three instances as it has two worker nodes, created with the flavor eo2.large.</p> <p>So far so good, nothing out of the ordinary. Click on Volumes =&gt; Volumes to show the list of volumes:</p> <p></p> <p>If --docker-volume-size is not turned on, only volumes with etcd-volume in their names would appear here, as is the case for the clusters authenabled and k8s-cluster. If it is turned on, additional volumes appear, one for each node. dockerspace will, therefore, have one volume for the master node and two for the worker nodes.</p> <p>Note the column Attached. All nodes for dockerspace use /dev/vdb for storage, which is a fact that will be important later on.</p> <p>As specified during creation, the docker volumes have a size of 50 GB each.</p> <p>In this step, you have created a new cluster with docker storage turned on and then you verified that the main difference lies in the creation of volumes for the cluster.</p>"},{"location":"kubernetes/Volume-based-vs-Ephemeral-based-Storage-for-Kubernetes-Clusters-on-3Engines-Cloud-OpenStack-Magnum.html.html#step-2-create-pod-manifest","title":"Step 2 - Create Pod Manifest\ud83d\udd17","text":"<p>To create a pod, you need to use a file in yaml format that defines the parameters of the pod. Use the command</p> <pre><code>nano redis.yaml\n</code></pre> <p>to create a file called redis.yaml and copy the following rows into it:</p> <pre><code>apiVersion: v1\nkind: Pod\nmetadata:\n name: redis\nspec:\n containers:\n - name: redis\n image: redis\n volumeMounts:\n - name: redis-storage\n mountPath: /data/redis\n volumes:\n - name: redis-storage\n emptyDir: {}\n</code></pre> <p>This is what it will look like in the terminal:</p> <p></p> <p>You are creating a Pod, its name will be redis, and it will occupy one container also called redis. The content of that container will be an image called redis.</p> <p>Redis is a well-known database and its image is prepared in advance so it can be pulled directly from a repository. 
If you were implementing your own application, the best way would be to release it through Docker and pull it from its repository.</p> <p>The new volume is called redis-storage and its directory will be /data/redis. The name of the volume will again be redis-storage and it will be of the type emptyDir.</p> <p>An emptyDir volume is initially empty and is first created when a Pod is assigned to a node. It will exist as long as that Pod is running there and if the Pod is removed, the related data in emptyDir will be deleted permanently. However, the data in an emptyDir volume is safe across container crashes.</p> <p>Besides emptyDir, about a dozen other volume types could have been used here: awsElasticBlockStore, azureDisk, cinder and so on.</p> <p>In this step, you have prepared the pod manifest with which you will create the pod in the next step.</p>"},{"location":"kubernetes/Volume-based-vs-Ephemeral-based-Storage-for-Kubernetes-Clusters-on-3Engines-Cloud-OpenStack-Magnum.html.html#step-3-create-a-pod-on-node-0-of-dockerspace","title":"Step 3 - Create a Pod on Node 0 of dockerspace\ud83d\udd17","text":"<p>In this step you will create a new pod on node 0 of the dockerspace cluster.</p> <p>First see what pods are available in the cluster:</p> <pre><code>kubectl get pods\n</code></pre> <p>This may produce an error line such as this one:</p> <pre><code>The connection to the server localhost:8080 was refused - did you specify the right host or port?\n</code></pre> <p>That will happen if you did not set up the kubectl parameters as specified in Prerequisites No. 3. You will now set it up for access to dockerspace:</p> <pre><code>mkdir dockerspacedir\n\nopenstack coe cluster config \\\n --dir dockerspacedir \\\n --force \\\n --output-certs \\\n dockerspace\n</code></pre> <p>First create a new directory, dockerspacedir, where the config file for access to the cluster will reside, then execute the cluster config command. 
The output will be a line like this:</p> <pre><code>export KUBECONFIG=/Users/duskosavic/3EnginesDocs/dockerspacedir/config\n</code></pre> <p>Copy it and run it as a command in the terminal. That will give the kubectl app access to the cluster. Create the pod with this command:</p> <pre><code>kubectl apply -f redis.yaml\n</code></pre> <p>It will read the parameters in the redis.yaml file and send them to the cluster.</p> <p>Here is the command to list all pods, if any:</p> <pre><code>kubectl get pods\n\nNAME READY STATUS RESTARTS AGE\nredis 0/1 ContainerCreating 0 7s\n</code></pre> <p>Repeat the command after a few seconds and see the difference:</p> <pre><code>kubectl get pods\n\nNAME READY STATUS RESTARTS AGE\nredis 1/1 Running 0 81s\n</code></pre> <p>In this step, you have created a new pod on the cluster dockerspace and it is running.</p> <p>In the next step, you will enter the container and start issuing commands just like you would in any other Linux environment.</p>"},{"location":"kubernetes/Volume-based-vs-Ephemeral-based-Storage-for-Kubernetes-Clusters-on-3Engines-Cloud-OpenStack-Magnum.html.html#step-4-executing-bash-commands-in-the-container","title":"Step 4 - Executing bash Commands in the Container\ud83d\udd17","text":"<p>In this step, you will start a bash shell in the container, which in Linux is equivalent to starting the operating system:</p> <pre><code>kubectl exec -it redis -- /bin/bash\n</code></pre> <p>The following listing is a reply:</p> <pre><code>root@redis:/data# df -h\nFilesystem Size Used Avail Use% Mounted on\noverlay 50G 1.4G 49G 3% /\ntmpfs 64M 0 64M 0% /dev\ntmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup\n/dev/vdb 50G 1.4G 49G 3% /data\n/dev/vda4 32G 4.6G 27G 15% /etc/hosts\nshm 64M 0 64M 0% /dev/shm\ntmpfs 3.9G 16K 3.9G 1% /run/secrets/kubernetes.io/serviceaccount\ntmpfs 3.9G 0 3.9G 0% /proc/acpi\ntmpfs 3.9G 0 3.9G 0% /proc/scsi\ntmpfs 3.9G 0 3.9G 0% /sys/firmware\n</code></pre> <p>This is what it would look like in the terminal:</p> <p></p> <p>Note 
that the prompt changed to</p> <pre><code>root@redis:/data#\n</code></pre> <p>which means you are now issuing commands within the container itself. The pod operates as Fedora 33 and you can use df to see the volumes and their sizes. The command</p> <pre><code>df -h\n</code></pre> <p>lists the filesystem sizes in human-readable form (the parameter -h often means Help, while here it is short for Human-readable).</p> <p>In this step, you have activated the container operating system.</p>"},{"location":"kubernetes/Volume-based-vs-Ephemeral-based-Storage-for-Kubernetes-Clusters-on-3Engines-Cloud-OpenStack-Magnum.html.html#step-5-saving-a-file-into-persistent-storage","title":"Step 5 - Saving a File Into Persistent Storage\ud83d\udd17","text":"<p>In this step you are going to test the longevity of files on persistent storage. You will first</p> <ul> <li>save a file into the /data/redis directory, then</li> <li>kill the Redis process, which in turn will</li> <li>kill the container; finally, you will</li> <li>re-enter the pod,</li> </ul> <p>where you will find the file intact.</p> <p>Note that /dev/vdb is 50 GB in size in the above listing and connect it to the column Attached To in the Volumes =&gt; Volumes listing:</p> <p></p> <p>In its own turn, it is tied to an instance:</p> <p></p> <p>That instance is injected into the container and \u2013 being an independent instance \u2013 acts as persistent storage to the pod.</p> <p>Create a file on the redis container:</p> <pre><code>cd /data/redis/\necho Hello &gt; test-file\n</code></pre> <p>Install software to see the PID number of the Redis process in the container:</p> <pre><code>apt-get update\napt-get install procps\nps aux\n</code></pre> <p>These are the running processes:</p> <p></p> <p>Take the PID number of the Redis process (here it is 1), and eliminate it with the command</p> <pre><code>kill 1\n</code></pre> <p>That will first kill the container and then exit its command line.</p> <p>In this step, you have created a file 
and killed the container that contains the file. This sets up the ground for testing whether the files survive a container crash.</p>"},{"location":"kubernetes/Volume-based-vs-Ephemeral-based-Storage-for-Kubernetes-Clusters-on-3Engines-Cloud-OpenStack-Magnum.html.html#step-6-check-the-file-saved-in-previous-step","title":"Step 6 - Check the File Saved in Previous Step\ud83d\udd17","text":"<p>In this step, you will find out whether the file test-file still exists.</p> <p>Enter the pod again, activate its bash shell and see whether the file has survived:</p> <pre><code>kubectl exec -it redis -- /bin/bash\ncd redis\nls\n\ntest-file\n</code></pre> <p>Yes, the file test-file is still there. The persistent storage for the pod contains it at path /data/redis:</p> <p></p> <p>In this step, you have entered the pod again and found out that the file has survived intact. That was expected, as volumes of type emptyDir will survive container crashes as long as the pod exists.</p>"},{"location":"kubernetes/Volume-based-vs-Ephemeral-based-Storage-for-Kubernetes-Clusters-on-3Engines-Cloud-OpenStack-Magnum.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>emptyDir survives container crashes but will disappear when the pod disappears. Other volume types may survive the loss of pods better. For instance:</p> <ul> <li>awsElasticBlockStore will have the volume unmounted when the pod is gone; being unmounted and not destroyed, it will persist the data it contains. This type of volume can have pre-populated data and can share the data among the pods.</li> <li>cephfs can also have pre-populated data and share them among the pods, but can additionally be mounted by multiple writers at the same time.</li> </ul> <p>Other constraints may also apply. Some of those volume types will require their own servers to be activated first, or that all nodes on which Pods are running need to be of the same type and so on. Prerequisite No. 
6 will list all types of volumes for Kubernetes clusters so study it and apply to your own Kubernetes apps.</p>"},{"location":"kubernetes/kubernetes.html.html","title":"Kubernetes","text":""},{"location":"kubernetes/kubernetes.html.html#available-documentation","title":"Available Documentation","text":"<ul> <li>How to Create a Kubernetes Cluster Using 3Engines Cloud OpenStack Magnum</li> <li>Default Kubernetes cluster templates in 3Engines Cloud Cloud</li> <li>How To Install OpenStack and Magnum Clients for Command Line Interface to 3Engines Cloud Horizon</li> <li>How To Use Command Line Interface for Kubernetes Clusters On 3Engines Cloud OpenStack Magnum</li> <li>How To Access Kubernetes Cluster Post Deployment Using Kubectl On 3Engines Cloud OpenStack Magnum</li> <li>Using Dashboard To Access Kubernetes Cluster Post Deployment On 3Engines Cloud OpenStack Magnum</li> <li>How To Create API Server LoadBalancer for Kubernetes Cluster on 3Engines Cloud OpenStack Magnum</li> <li>Creating Additional Nodegroups in Kubernetes Cluster on 3Engines Cloud OpenStack Magnum</li> <li>Autoscaling Kubernetes Cluster Resources on 3Engines Cloud OpenStack Magnum</li> <li>Volume-based vs Ephemeral-based Storage for Kubernetes Clusters on 3Engines Cloud OpenStack Magnum</li> <li>Backup of Kubernetes Cluster using Velero</li> <li>Using Kubernetes Ingress on 3Engines Cloud OpenStack Magnum</li> <li>Deploying Helm Charts on Magnum Kubernetes Clusters on 3Engines Cloud Cloud</li> <li>Deploying HTTPS Services on Magnum Kubernetes in 3Engines Cloud Cloud</li> <li>Installing JupyterHub on Magnum Kubernetes Cluster in 3Engines Cloud Cloud</li> <li>Install and run Argo Workflows on 3Engines Cloud Magnum Kubernetes</li> <li>Installing HashiCorp Vault on 3Engines Cloud Magnum</li> <li>HTTP Request-based Autoscaling on K8S using Prometheus and Keda on 3Engines Cloud</li> <li>Create and access NFS server from Kubernetes on 3Engines Cloud</li> <li>Deploy Keycloak on Kubernetes with a sample app on 
3Engines Cloud</li> <li>Install and run Dask on a Kubernetes cluster in 3Engines Cloud</li> <li>Install and run NooBaa on a Kubernetes cluster in single- and multi-cloud environments on 3Engines Cloud</li> <li>Private container registries with Harbor on 3Engines Cloud Kubernetes</li> <li>Deploying vGPU workloads on 3Engines Cloud Kubernetes</li> <li>Kubernetes cluster observability with Prometheus and Grafana on 3Engines Cloud</li> <li>Enable Kubeapps app launcher on 3Engines Cloud Magnum Kubernetes cluster</li> <li>Install GitLab on 3Engines Cloud Kubernetes</li> <li>Sealed Secrets on 3Engines Cloud Kubernetes</li> <li>CI/CD pipelines with GitLab on 3Engines Cloud Kubernetes - building a Docker image</li> <li>How to create Kubernetes cluster using Terraform on 3Engines Cloud</li> <li>GitOps with Argo CD on 3Engines Cloud Kubernetes</li> <li>Configuring IP Whitelisting for OpenStack Load Balancer using Horizon and CLI on 3Engines Cloud</li> <li>Configuring IP Whitelisting for OpenStack Load Balancer using Terraform on 3Engines Cloud</li> <li>Implementing IP Whitelisting for Load Balancers with Security Groups on 3Engines Cloud</li> <li>How to install Rancher RKE2 Kubernetes on 3Engines Cloud</li> <li>Automatic Kubernetes cluster upgrade on 3Engines Cloud OpenStack Magnum</li> </ul>"},{"location":"networking/Cannot-access-VM-with-SSH-or-PING-on-3Engines-Cloud.html.html","title":"Cannot access VM with SSH or PING on 3Engines Cloud\ud83d\udd17","text":"<p>Before contacting Support, please make sure that port 22 (SSH) is allowed in the Security Groups associated with your instance. If this is configured correctly, please try to perform a soft or hard reboot of your VM. The lack of connection could have been caused by an expired DHCP lease. 
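The checks described above can also be performed with the OpenStack CLI; this is a sketch, assuming the `openstack` client is installed and your project's RC file is sourced, and the instance name `my-vm` and security group `default` are placeholders:

```shell
# Inspect the rules of the security group attached to the instance;
# an ingress rule for TCP port 22 must be present for SSH to work
openstack security group rule list default --long

# If the rule is missing, allow inbound SSH (narrow 0.0.0.0/0 for production)
openstack security group rule create --ingress --protocol tcp \
  --dst-port 22 --remote-ip 0.0.0.0/0 default

# Try a soft reboot first, then a hard reboot to force a fresh DHCP lease
openstack server reboot my-vm
openstack server reboot --hard my-vm
```

A hard reboot power-cycles the instance, so the guest requests a fresh DHCP lease on boot.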
Rebooting will allow you to get a fresh DHCP session and everything should work fine.</p>"},{"location":"networking/Cannot-ping-VM-on-3Engines-Cloud.html.html","title":"Cannot ping VM on 3Engines Cloud\ud83d\udd17","text":"<p>If you have problems with access to your VM (ping is not responding), try the following:</p> <p>Install the packages net-tools (to have the ifconfig command) and arping:</p> <p>in CentOS:</p> <pre><code>sudo yum install net-tools arping\n</code></pre> <p>in Ubuntu:</p> <pre><code>sudo apt install net-tools arping\n</code></pre> <p>Check the name of the interface connected to the private network:</p> <pre><code>ifconfig\n</code></pre> <p>Based on the response, find the number of the interface on 192.168.x.x (eth or ens).</p> <p>After that, invoke the following commands:</p> <p>in CentOS:</p> <pre><code>sudo arping -U -c 2 -I eth&lt;number&gt; $(ip -4 a show dev eth&lt;number&gt; | sed -n 's/.*inet \\([0-9\\.]\\+\\).*/\\1/p')\n</code></pre> <p>in Ubuntu:</p> <pre><code>sudo arping -U -c 2 -I ens&lt;number&gt; $(ip -4 a show dev ens&lt;number&gt; | sed -n 's/.*inet \\([0-9\\.]\\+\\).*/\\1/p')\n</code></pre> <p>Next, ping your external IP address and check whether it helped.</p>"},{"location":"networking/Generating-a-SSH-keypair-in-Linux-on-3Engines-Cloud.html.html","title":"Generating an SSH keypair in Linux on 3Engines Cloud\ud83d\udd17","text":"<p>In order to generate an SSH keypair in Linux, we recommend using the command ssh-keygen.</p> <p>If the system does not have this package installed, install the latest updates:</p> Ubuntu and Debian family <pre><code>sudo apt-get update &amp;&amp; sudo apt-get install openssh-client\n</code></pre> CentOS and Red Hat <pre><code>sudo yum install openssh-clients\n</code></pre> <p>After that, use the following command in the terminal:</p> <pre><code>ssh-keygen\n</code></pre> <p>with additional flags:</p> <code>-t</code> rsa authentication key type <code>-b</code> 4096 bit length, 2048 if not specified. Available values: 1024, 2048, 4096. 
The greater the value, the stronger the key will be. <code>-C</code> user@server name for identification at the end of the file <code>-f</code> ~/.ssh/keys/keylocation location of the folder with SSH keys <code>-N</code> passphrase, can be omitted if the user prefers connecting without additional key security <p></p> <p>The application will ask for the name of the key. Press Enter for the defaults:</p> <ul> <li>id_rsa for the private and</li> <li>id_rsa.pub for the public key and passphrase (pressing Enter ignores it).</li> </ul> <p></p> <p>Next, ssh-keygen will show</p> <ul> <li>the location where the keys are saved,</li> <li>the fingerprint of the keypair and</li> <li>a semi-graphic randomart image expressing the randomness used in generating the unique key.</li> </ul> <p></p> <p>To avoid problems with keys being rejected due to overly open permissions, navigate to the folder containing both keys and enter the command:</p> <pre><code>chmod 600 id_rsa &amp;&amp; chmod 600 id_rsa.pub\n</code></pre>"},{"location":"networking/How-can-I-access-my-VMs-using-names-instead-of-IP-addresses-on-3Engines-Cloud.html.html","title":"How can I access my VMs using names instead of IP addresses on 3Engines Cloud\ud83d\udd17","text":"<p>The VMs are seen simultaneously in several networks, at least in your \u201cprivate\u201d LAN and in the public Internet. By default the public addresses (Floating IPs, 185.48.x.x) have no associated names. You may assign such names from your DNS domain or you may request a name from us (as an additional service). The names provided by us have the following format:</p> <pre><code>computer_name.users.creodias.eu\n</code></pre> <p>where computer_name is chosen by you.</p> <p>If you need the name just to access the machine from your office workstation, the simplest way is to add its address and friendly name to /etc/hosts.</p> <p>The VMs in a given project share a common \u201cprivate\u201d network \u2013 by default it is 10.0.0.0/24. 
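The ssh-keygen flags listed in the keypair article above can be combined into one non-interactive command; this is a sketch in which the key path, comment, and empty passphrase are example choices, not requirements:

```shell
# Generate a 4096-bit RSA keypair at an explicit path, with a comment and no passphrase
mkdir -p "$HOME/.ssh/keys"
ssh-keygen -t rsa -b 4096 -C "user@server" -f "$HOME/.ssh/keys/mykey" -N ""

# Tighten permissions so ssh does not reject the key files as too open
chmod 600 "$HOME/.ssh/keys/mykey" "$HOME/.ssh/keys/mykey.pub"
```

Passing `-f` and `-N` suppresses the interactive prompts for file name and passphrase, which is convenient for scripting; omit `-N ""` if you prefer to set a passphrase interactively.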
You may create additional private networks with any addresses you like. However, they will not be equipped with DNS. If the machines are expected to recognize each other by their names, either /etc/hosts needs to be created and copied to all machines, or a private DNS may be run on one of them. Moreover, although the addresses are dynamically assigned, they are constant, which means they do not change from the moment of creation to the moment of deletion of your machine.</p>"},{"location":"networking/How-can-I-open-new-ports-port-80-for-http-for-my-service-or-instance-on-3Engines-Cloud.html.html","title":"How can I open new ports for http for my service or instance on 3Engines Cloud\ud83d\udd17","text":"<p>To open a new port for a service on an instance, click Project -&gt; Network -&gt; Security Groups and click \u201cCreate Security Group\u201d.</p> <p>By default, in the newly created group there will be two Egress (outgoing) rules - for IPv4 and IPv6.</p> <p>You need to create a new Ingress (incoming) rule that should look like this:</p> <pre><code>Ingress IPv4 TCP 80 (HTTP) 0.0.0.0/0\n</code></pre> <p>After creating a new Security Group you have to add it to your instance.</p> <p>To do so, simply click Project -&gt; Compute -&gt; Instances, then select \u201cEdit Security Groups\u201d and add it by clicking the \u201c+\u201d button.</p> <p></p>"},{"location":"networking/How-is-my-VM-visible-in-the-internet-with-no-Floating-IP-attached-on-3Engines-Cloud.html.html","title":"How is my VM visible in the internet with no Floating IP attached on 3Engines Cloud\ud83d\udd17","text":"<p>This article clarifies how an instance without a floating IP address would respond if we were to search for it from an external machine.</p>"},{"location":"networking/How-is-my-VM-visible-in-the-internet-with-no-Floating-IP-attached-on-3Engines-Cloud.html.html#how-to-find-out-what-ip-address-is-attached-to-vm","title":"How to find out what IP address is attached to 
VM?\ud83d\udd17","text":"<p>In Linux you can easily see your IP by executing the command:</p> <pre><code>curl ifconfig.me\n</code></pre> <p>In Windows, the easiest way is visiting a website that shows your public and private IP address, for example: whatismyipaddress.com/</p>"},{"location":"networking/How-is-my-VM-visible-in-the-internet-with-no-Floating-IP-attached-on-3Engines-Cloud.html.html#is-my-vm-visible-from-internet-without-floating-ip-assigned","title":"Is my VM visible from Internet without floating IP assigned?\ud83d\udd17","text":"<p>No. If we don\u2019t associate a Floating IP to the VM, it won\u2019t be routable from the internet. When checking the IP address using the process mentioned above, we will only see the interface address of the router attached to the private network (by default 192.168.0.1)</p>"},{"location":"networking/How-is-my-VM-visible-in-the-internet-with-no-Floating-IP-attached-on-3Engines-Cloud.html.html#can-i-send-data-from-my-vm-without-a-floating-ip","title":"Can I send data from my VM without a floating IP?\ud83d\udd17","text":"<p>Yes. If you want to send data from your VM to an external server, you should also allow receiving packets from 192.168.0.1 in your firewall configuration.</p>"},{"location":"networking/How-is-my-VM-visible-in-the-internet-with-no-Floating-IP-attached-on-3Engines-Cloud.html.html#is-my-vm-accessible-from-the-outside-without-floating-ip","title":"Is my VM accessible from the outside without floating IP?\ud83d\udd17","text":"<p>No. If a VM needs to be accessible from the Internet, a floating IP address must be attached to the instance. 
For more information on assigning Floating IPs to the instance, please see the following article: How to Add or Remove Floating IP\u2019s to your VM on 3Engines Cloud.</p>"},{"location":"networking/How-to-Add-or-Remove-Floating-IPs-to-your-VM-on-3Engines-Cloud.html.html","title":"How to Add or Remove Floating IP\u2019s to your VM on 3Engines Cloud\ud83d\udd17","text":"<p>In order to make your VM accessible from the Internet, you need to use Floating IPs. Floating IPs in OpenStack are public IP addresses assigned to your Virtual Machines. Assignment of a Floating IP allows you (if you have your Security Groups set properly) to host services like SSH or HTTP over the Internet.</p>"},{"location":"networking/How-to-Add-or-Remove-Floating-IPs-to-your-VM-on-3Engines-Cloud.html.html#how-to-assign-a-floating-ip-to-your-vm","title":"How to assign a Floating IP to your VM?\ud83d\udd17","text":"<p>In the Instances tab in Horizon, click the dropdown menu next to your VM and choose Associate Floating IP.</p> <p></p> <p>You will be shown a window like this:</p> <p></p> <p>You may choose an address from the dropdown menu, but if it\u2019s empty, you need to allocate an address first. Click the + icon on the right.</p> <p></p> <p>Click Allocate IP.</p> <p>Warning</p> <p>Please always choose the external network!</p> <p></p> <p>Select your newly allocated IP address and click Associate.</p> <p></p> <p>Note</p> <p>The IP address should be associated with a local address from the 192.168.x.x subnet. If you have a 10.x.x.x address, change it to a 192.168.x.x address.</p> <p>Click Associate.</p> <p>Note</p> <p>The VMs communicate with each other through an internal network 192.168.x.x, so if you are connecting from one Virtual Machine to another you should use private addresses. If you try to connect your VM to the wrong network you will be notified by the following message:</p> <p></p> <p>You now have a public IP assigned to your instance. 
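The Horizon steps above have a command-line equivalent; this is a sketch, where `external` is the public network name used on this cloud and `my-vm` plus the IP value are placeholders for your own instance and allocated address:

```shell
# Allocate a new floating IP from the external network's pool
openstack floating ip create external

# Associate the allocated address with your instance
openstack server add floating ip my-vm 185.48.0.10

# Verify that the instance now shows both private and public addresses
openstack server show my-vm -c addresses
```

The association attaches the floating IP to the instance's private 192.168.x.x port, exactly as the Horizon dialog does.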
It is visible in the Instances menu:</p> <p></p> <p>You can now connect to your Virtual Machine through SSH or RDP from the Internet.</p>"},{"location":"networking/How-to-Add-or-Remove-Floating-IPs-to-your-VM-on-3Engines-Cloud.html.html#how-to-disassociate-a-floating-ip","title":"How to disassociate a Floating IP?\ud83d\udd17","text":"<p>If you no longer need a public IP address you may disassociate it from your VM. Click Disassociate Floating IP from the dropdown menu:</p> <p></p>"},{"location":"networking/How-to-Add-or-Remove-Floating-IPs-to-your-VM-on-3Engines-Cloud.html.html#how-to-release-a-floating-ip-return-it-to-the-pool","title":"How to release a Floating IP (return it to the pool)?\ud83d\udd17","text":"<p>Floating IPs (just like any other OpenStack resource) have their cost when kept reserved and not used.</p> <p>If you don\u2019t want to keep your Floating IPs reserved for your project you may release them to the OpenStack pool for other users, which will also reduce the costs of your project.</p> <p>Go to Project \u2192 Network \u2192 Floating IPs</p> <p></p> <p>For an address that is not in use, the Release Floating IP option will be available. Click it to release the IP address.</p>"},{"location":"networking/How-to-Import-SSH-Public-Key-to-OpenStack-Horizon-on-3Engines-Cloud.html.html","title":"How to import SSH public key to OpenStack Horizon on 3Engines Cloud\ud83d\udd17","text":"<p>If you already have an SSH key pair on your computer, you can import your public key to the Horizon dashboard. Then, you will be able to use that imported key when launching a new instance.</p> <p>By importing it directly to Horizon, you will eliminate the need to use tools like ssh-copy-id or manually edit the authorized_keys file. Also, your key will be available in OpenStack CLI.</p> <p>Warning</p> <p>After uploading your public key, you will not be able to apply it to an already created virtual machine. 
If you need to add a key to an existing VM, please follow this article instead: How to add SSH key from Horizon web console on 3Engines Cloud.</p> <p>Note</p> <p>You can have multiple SSH keys uploaded to your Horizon dashboard. You can then use them for different tasks.</p>"},{"location":"networking/How-to-Import-SSH-Public-Key-to-OpenStack-Horizon-on-3Engines-Cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Preparation</li> <li>Importing a Key</li> </ul>"},{"location":"networking/How-to-Import-SSH-Public-Key-to-OpenStack-Horizon-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p> <p>No. 2 Generated SSH key pair</p> <p>You need a generated SSH key pair on your computer. If you do not have one yet, you can create it by following one of these articles:</p>"},{"location":"networking/How-to-add-SSH-key-from-Horizon-web-console-on-3Engines-Cloud.html.html","title":"How to add SSH key from Horizon web console on 3Engines Cloud\ud83d\udd17","text":"<p>While using the web console on your VM, you may face a situation where you have to enter an SSH public key.</p> <p>Unfortunately, copy/paste functionality is not supported by our console. 
For adding a key to an existing instance, the easiest method would be getting the key via curl.</p> <p>For instance, you may go to https://pastebin.com/ and put your public key there (you can control whether and for how long the content is visible to others)</p> <p></p> <p>copy the URL of the raw pastebin content (to obtain the raw content, click the \u201cRaw\u201d icon),</p> <p></p> <p></p> <p>and issue the command from inside of the instance:</p> <pre><code>curl &lt;pastebin url here&gt; &gt; mykey.txt\n</code></pre> <p></p> <p>After downloading the file, you may check if your key is saved correctly using the cat command:</p> <pre><code>cat mykey.txt\n</code></pre> <p></p> <p>Please note that the key must be put into /home/eouser/.ssh/authorized_keys, because you can ssh to your instance as eouser, but not as eoconsole. So once you are the eoconsole user and get the key as described above, you should use:</p> <pre><code>cat mykey.txt | sudo tee -a /home/eouser/.ssh/authorized_keys\n</code></pre>"},{"location":"networking/How-to-connect-to-your-virtual-machine-via-SSH-in-Linux-on-3Engines-Cloud.html.html","title":"How to connect to your virtual machine via SSH in Linux on 3Engines Cloud\ud83d\udd17","text":"<p>1. Prerequisites:</p> <p>1.1. Private and public keys have been created. The key files were saved on the local disk of the machine you will connect from. It is recommended to put the keys in the ~/.ssh folder.</p> <p>1.2. During the VM setup, the generated key we want to use was assigned.</p> <p>For example, when you create an SSH key named \u201ctestkey\u201d in the Horizon dashboard, its name will appear next to your VM.</p> <p></p> <p>2. Connecting to a virtual machine via SSH:</p> <p>2.1. If your virtual machine has already been assigned a Floating IP (the instances menu next to your virtual machine lists the IP address) you can proceed to the next step. If not, please follow the guide: How to Add or Remove Floating IP\u2019s to your VM on 3Engines Cloud.</p> <p>2.2. 
Go to the ~/.ssh folder where your SSH keys were saved. Start your terminal (right click and click \u201cOpen in Terminal\u201d).</p> <p>2.3. Change the permissions of the private key file. In the case of the file named id_rsa, type:</p> <pre><code>sudo chmod 600 id_rsa\n</code></pre> <p>Enter your password and confirm.</p> <p>2.4. Once you have completed all of the steps above, you can log in. Let us assume that your generated and assigned Floating IP address in this case is 64.225.132.99. Execute the following command in the terminal:</p> <pre><code>ssh eouser@64.225.132.99\n</code></pre> <p>2.5. The username in the terminal will change to eouser. This means that the SSH connection was successful.</p> <p></p>"},{"location":"networking/How-to-create-a-network-with-router-in-Horizon-Dashboard-on-3Engines-Cloud.html.html","title":"How to create a network with router in Horizon Dashboard on 3Engines Cloud\ud83d\udd17","text":"<p>When you create a new project in Horizon, its content is empty. You have to manually configure your private network. In order to complete this task, please follow these steps.</p> <ol> <li>Log in to your OpenStack dashboard and choose the Network tab, then choose the Networks sub-label.</li> </ol> <p></p> <ol> <li>Click on the \u201cCreate Network\u201d button.</li> </ol> <p></p> <ol> <li>Define your Network Name and tick two checkboxes: Enable Admin State and Create Subnet. Go to Next.</li> </ol> <p></p> <ol> <li>Define your Subnet name. Assign a valid network address with the mask presented as a prefix. (The prefix determines how many bits are reserved for the network address.)</li> </ol> <p>Define the Gateway IP for your Router. Normally it\u2019s the first available address in the subnet.</p> <p>Go to Next.</p> <p></p> <ol> <li>In Subnet Details you are able to turn on the DHCP server, assign DNS servers to your network and set up basic routing. 
In the end, confirm the process with the \u201cCreate\u201d button.</li> </ol> <p></p> <ol> <li>Click on the Routers tab.</li> </ol> <p></p> <ol> <li>Click on the \u201cCreate Router\u201d button.</li> </ol> <p></p> <ol> <li>Name your device and assign the only available network \u2192 external. Finish by choosing the \u201cCreate Router\u201d blue button.</li> </ol> <p></p> <ol> <li>Click on your newly created Router (e.g. called \u201cRouter_1\u201d).</li> </ol> <p></p> <ol> <li>Choose Interfaces.</li> </ol> <p></p> <ol> <li>Choose the + Add Interface button.</li> </ol> <p></p> <ol> <li>Assign a proper subnet and fill in the IP Address. (It\u2019s the gateway for our network). Submit the process.</li> </ol> <p></p> <ol> <li>The internal interface has been attached to the router.</li> </ol> <p></p>"},{"location":"networking/How-to-run-and-configure-Firewall-as-a-service-and-VPN-as-a-service-on-3Engines-Cloud.html.html","title":"How to run and configure Firewall as a service and VPN as a service on 3Engines Cloud\ud83d\udd17","text":"<p>Note</p> <p>This guide provides a sample process for configuring VPN as a service. It should not be considered the only way to configure this solution.</p> <p>To start the VPN as a service, it is necessary to configure and start the Firewall as a service. The sequence of steps will be described below.</p> <p>Creating FWAAS infrastructure</p> <p>Creating and configuring local networks</p> <ol> <li>Log in to your OpenStack dashboard and choose the Network tab, then choose the Networks sub-label.</li> </ol> <p></p> <ol> <li>Click on the \u201cCreate Network\u201d button.</li> </ol> <p></p> <ol> <li>Define your Network Name as \u201cGateway\u201d and go to the Subnet Tab.</li> <li>Define your Subnet name as \u201cGateway_subnet\u201d. Network address: 10.100.100.0/24 and gateway IP 10.100.100.1.</li> </ol> <p></p> <ol> <li>In Subnet Details keep Enable DHCP marked. 
Leave the rest of the fields blank and click the Create button.</li> </ol> <p></p> <ol> <li> <p>Repeat this procedure from points 2-5 using different data:</p> </li> <li> <p>Network Name: \u201cInternal\u201d</p> </li> <li>Subnet Name: \u201cInternal_subnet\u201d</li> <li>Network Address: 10.200.200.0/24</li> <li> <p>Gateway IP: 10.200.200.1</p> </li> <li> <p>Click on the Create Router button.</p> </li> </ol> <p></p> <ol> <li>Name your device, for example \u201cRouter_Fwaas\u201d. Choose the external network in the External Network tab. Click Create Router.</li> </ol> <p></p> <ol> <li>Click on your newly created Router (e.g. called \u201cRouter_Fwaas\u201d).</li> </ol> <p></p> <ol> <li>Choose Interfaces and the Add Interface button.</li> </ol> <p></p> <ol> <li>Choose the Gateway subnet from the Subnet menu and click the Submit button.</li> </ol> <p></p> <ol> <li>After choosing Network -&gt; Network Topology, the network topology should look like this.</li> </ol> <p></p> <p>Creating and configuring the VM with installed Firewall client</p> <ol> <li>Open the Compute -&gt; Instances tab and choose Launch instance.</li> </ol> <p></p> <ol> <li>Name the VM instance (for example Firewall_VM) and go to the Source tab.</li> </ol> <p></p> <ol> <li>Find the opnsense image and add it to your VM. Go to the Flavor tab.</li> </ol> <p></p> <ol> <li> <p>Choose the specification of your VM. 
Prerequisites to launch the Firewall:</p> </li> <li> <p>Minimal: CPU 1 Core, 2 GB RAM memory, 8GB SSD drive (eo1.xmedium flavor)</p> </li> <li>Optimal: CPU 2 Core, 4 GB RAM memory, 16GB SSD drive (eo1.medium flavor)</li> </ol> <p>Go to the Networks tab.</p> <p></p> <ol> <li> <p>Add the created local networks in the correct order:</p> </li> <li> <p>Internal network</p> </li> <li>Gateway network</li> </ol> <p></p> <ol> <li>Delete all security groups and open the Configuration tab.</li> </ol> <p></p> <ol> <li>Paste the configuration script presented below:</li> </ol> <pre><code>#cloud-config\n\nruncmd:\n- |\n address=$(curl http://169.254.169.254/latest/meta-data/local-ipv4)\n first=$(echo \"$address\" | /usr/bin/cut -d'.' -f1)\n second=$(echo \"$address\" | /usr/bin/cut -d'.' -f2)\n third=$(echo \"$address\" | /usr/bin/cut -d'.' -f3)\n sed -i '' \"s/&lt;ipaddr&gt;192.168.*.*&lt;\\/ipaddr&gt;/&lt;ipaddr&gt;$first.$second.$third.1&lt;\\/ipaddr&gt;/\" /conf/config.xml\n sed -i '' '/&lt;disablefilter&gt;enabled&lt;\\/disablefilter&gt;/g' /conf/config.xml\n reboot\n</code></pre> <p></p> <p>Choose Launch instance.</p> <ol> <li>After creating the VM, click its name in the Instances tab.</li> </ol> <p></p> <ol> <li>Choose the Interfaces tab and click edit port next to each port.</li> </ol> <p></p> <ol> <li>Disable port security and click Update.</li> </ol> <p></p> <ol> <li>Go to the Network -&gt; Floating IPs menu and choose Allocate IP to project.</li> </ol> <p></p> <ol> <li>Choose Allocate IP.</li> </ol> <p></p> <ol> <li>Click Associate next to the newly generated Floating IP and assign it to your Firewall_VM port.</li> </ol> <p></p> <ol> <li>After creation, the Firewall VM LAN address (vtnet0) should be 10.200.200.1 (you can check it using the console on Horizon).</li> </ol> <p></p> <p>Configuring VPN service</p> <p>Prerequisites: For configuring your VPN server using a graphical interface you need a VM with a preinstalled GUI (for example MINT, XFCE etc.) connected to the Internal network. 
Click here for instructions on how to install a GUI on an Ubuntu 20.04 VM: How to Use GUI in Linux VM on 3Engines Cloud and access it From Local Linux Computer.</p> <ol> <li> <p>In your default WEB browser open IP 10.200.200.1.</p> </li> <li> <p>User: root</p> </li> <li>Password: opnsense</li> </ol> <p></p> <ol> <li>Click VPN -&gt; OpenVPN -&gt; Servers on the left. At the bottom of the new page, click the wand icon for Use a wizard to setup a new server.</li> </ol> <p></p> <ol> <li>On the Authentication Type Selection page, ensure Type of Server is set to Local User Access and click Next.</li> </ol> <p></p> <ol> <li> <p>Set the fields in the following order:</p> </li> <li> <p>Descriptive name: Name of your VPN Server Certificate (e.g. OPNsense-CA)</p> </li> <li>Key length: 2048 bit</li> <li>Lifetime: Lifetime in days of your VPN Server certificate (e.g. 825)</li> <li>Country Code: Two-letter ISO country code</li> <li>State or Province: Full State or Province name, not abbreviated</li> <li>City: City or other locality name</li> <li>Organization: Organization name, often the Company or Group name</li> <li>Email: E-mail address for the Certificate contact</li> </ol> <p></p> <ol> <li>Click Add new CA to continue and Add new Certificate on the next page.</li> </ol> <p></p> <ol> <li>On the Add a Server Certificate page, set the Descriptive name to server, leave the Key length at 2048 bit and set the Lifetime to 3650.</li> </ol> <p></p> <ol> <li>Click Create new Certificate to continue.</li> <li> <p>The next page should be Server Setup, set the following:</p> </li> <li> <p>Set Interface to WAN</p> </li> <li>Ensure Protocol is UDP and Port is 1194</li> <li>Set a description, for example \u201cVPN Server\u201d</li> <li>Change DH Parameters Length to 4096</li> <li>Change Encryption Algorithm to \u2018AES-256-CBC (256 bit key, 128 bit block)\u2019</li> <li>Change Auth Digest Algorithm to \u2018SHA512 (512-bit)\u2019</li> <li>In the IPv4 Tunnel Network field, enter 
\u201810.0.8.0/24\u2019</li> <li>To allow access to machines on the local network, enter your local IP range in the Local Network setting. It should be 10.200.200.0/24</li> <li>Set the Compression to \u2018No Preference\u2019</li> <li>Set DNS Server 1 to 10.0.8.1</li> </ol> <p>All other options can be left at their defaults. Click Next.</p> <p></p> <ol> <li>On the Firewall Rule Configuration page, tick both the Firewall Rule and OpenVPN rule checkboxes and click Next.</li> </ol> <p></p> <ol> <li>Now your VPN server is successfully created.</li> </ol> <p></p> <p>User Setup</p> <p>Creating a new User</p> <ol> <li>Click System -&gt; Access -&gt; Users on the left and choose the Add icon on the left of the Users page.</li> </ol> <p></p> <ol> <li>Enter a Username, Password, and tick the box Click to create a user certificate further down. Fill in any other fields you would like, but they are not required.</li> </ol> <p></p> <ol> <li>You will be taken to a Certificates page. Select \u2018Create an internal Certificate\u2019 in the Method drop down box. The page will re-arrange itself.</li> <li>Ensure Certificate Authority is the name we created during the wizard, which should be \u2018OPNsense-CA\u2019, and Type is \u2018Client Certificate\u2019.</li> </ol> <p></p> <ol> <li>Change the Lifetime (days) of the certificate and click Save.</li> </ol> <p></p> <ol> <li>You will be taken back to the Create User page. User Certificates should now have an entry; click Save at the bottom again.</li> </ol> <p>Setting Up OpenVPN Client</p> <p>To connect to your VPN server you need a VPN client. You can use one of the recommended programs, such as OpenVPN or Viscosity. Below you can find instructions on how to use the OpenVPN client to connect to the VPN Server.</p> <p>Export Connection from OPNsense</p> <ol> <li>Click VPN -&gt; OpenVPN -&gt; Client Export on the left. 
Change the hostname to the Floating IP assigned to your VPN Server.</li> </ol> <p></p> <ol> <li>Click the cloud icon next to your username or server name to download the certificate and configuration files.</li> </ol> <p></p> <ol> <li>Unpack the downloaded configuration files and find the OpenVPN config file.</li> </ol> <p>For Windows PC\u2019s:</p> <ol> <li>Download and install the newest version of OpenVPN. You can find it here: https://openvpn.net/community-downloads/</li> <li>Save all the configuration files in C:/Program Files/OpenVPN/config and try to connect using the pre-configured credentials.</li> </ol> <p>For Linux (Ubuntu) PC\u2019s</p> <ol> <li>Open the Terminal in the folder which contains the configuration files.</li> <li>Use the commands presented below:</li> </ol> <pre><code>sudo apt update\nsudo nmcli connection import type openvpn file nameofyourovpnconffile.ovpn\n</code></pre> <ol> <li>Try to connect to the VPN using the Ubuntu configuration bar (top right corner) and the appropriate credentials.</li> </ol>"},{"location":"networking/networking.html.html","title":"Networking","text":""},{"location":"networking/networking.html.html#available-documentation","title":"Available Documentation","text":"<ul> <li>How can I access my VMs using names instead of IP addresses on 3Engines Cloud</li> <li>How to Add or Remove Floating IP\u2019s to your VM on 3Engines Cloud</li> <li>Cannot access VM with SSH or PING on 3Engines Cloud</li> <li>Cannot ping VM on 3Engines Cloud</li> <li>How to connect to your virtual machine via SSH in Linux on 3Engines Cloud</li> <li>How to create a network with router in Horizon Dashboard on 3Engines Cloud</li> <li>How can I open new ports for http for my service or instance on 3Engines Cloud</li> <li>Generating an SSH keypair in Linux on 3Engines Cloud</li> <li>How to add SSH key from Horizon web console on 3Engines Cloud</li> <li>How is my VM visible in the internet with no Floating IP attached on 3Engines Cloud</li> <li>How to run and configure Firewall as a service and 
VPN as a service on 3Engines Cloud</li> <li>How to import SSH public key to OpenStack Horizon on 3Engines Cloud</li> </ul>"},{"location":"openstackcli/How-To-Create-and-Configure-New-Project-on-3Engines-Cloud-Cloud.html.html","title":"How to Create and Configure New Openstack Project Through Horizon on 3Engines Cloud\ud83d\udd17","text":""},{"location":"openstackcli/How-To-Create-and-Configure-New-Project-on-3Engines-Cloud-Cloud.html.html#default-elements-of-the-account","title":"Default elements of the account\ud83d\udd17","text":"<p>When you first create your account at 3Engines Cloud hosting, default values for the account will be applied. Among others, you will</p> <ul> <li>become the owner of a tenant manager account and</li> <li>have a default project created along with</li> <li>three networks and</li> <li>two security groups.</li> </ul> <p>In OpenStack terminology, the role of the tenant manager is to be an administrator of the account. As a tenant manager, you can</p> <ul> <li>use the account directly but can also</li> <li>create other users of the account.</li> </ul> <p>Before users can start using the account, you have to create a project and attach other elements to it: users, groups, roles and so on. Then you invite a user to the organization and they log in with their own login details. There is a catch, though:</p> <ul> <li>a new project that the tenant manager creates will not have an automatically generated external network, while</li> <li>the allow_ping_ssh_icmp_rdp security group will not be generated either.</li> </ul> <p>In other words, the users of the account won\u2019t have access to the Internet.</p> <p>In this article you will see how to overcome these problems.</p>"},{"location":"openstackcli/How-To-Create-and-Configure-New-Project-on-3Engines-Cloud-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 
1 Hosting</p> <p>You need a 3Engines Cloud hosting account with Horizon interface https://horizon.3Engines.com.</p> <p>No. 2 Introduction to OpenStack Projects</p> <p>The article What is an OpenStack project on 3Engines Cloud will define basic elements of an OpenStack project \u2013 groups, projects, roles and so on.</p> <p>No. 3 Security groups</p> <p>The article How to use Security Groups in Horizon on 3Engines Cloud describes how to create and edit security groups. They enable ports through which the virtual machine communicates with other networks, in particular, with the Internet at large.</p> <p>No. 4 Create network with router</p> <p>Here is how to create a network with router:</p> <p>How to create a network with router in Horizon Dashboard on 3Engines Cloud</p>"},{"location":"openstackcli/How-To-Create-and-Configure-New-Project-on-3Engines-Cloud-Cloud.html.html#default-values-in-the-tenant-manager-account","title":"Default values in the tenant manager account\ud83d\udd17","text":"<p>Click on Network -&gt; Networks and verify the presence of the three default networks. 
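The same verification can be done from the command line; this is a sketch, assuming the OpenStack client is installed and configured for this project (the expected names follow the defaults described above):

```shell
# List networks visible to the project;
# expect the cloud_00xxx_x, eodata_00xxx and external networks
openstack network list -c Name -c Subnets

# List security groups;
# expect "default" and "allow_ping_ssh_icmp_rdp"
openstack security group list -c Name -c Description
```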
Since the cloud name contains the number 341, the networks will contain it too: cloud_00341_3 and eodata_00341_3.</p> <p></p> <p>Note</p> <p>This number, 341, will vary from cloud to cloud and you can see it in the upper left corner of the browser window.</p> <p>In particular, two networks that come as default have their names starting with:</p> <ul> <li>cloud_00, the network for internal communication of all the objects in the account</li> <li>eodata_00, the network for accessing the Earth Observation Data (images from satellites for you to use)</li> <li>The third network is called external and has access to the outside world \u2013 the Internet, at large.</li> </ul> <p>Click on option Network -&gt; Security Groups to verify the presence of two default security groups:</p> <p></p> <p>The default security groups are:</p> <ul> <li>default, the default security group</li> <li>allow_ping_ssh_icmp_rdp, to allow access for the usual types of traffic: for the Internet, from Windows to the cloud and so on.</li> </ul> <p>The former shuts down any communication to the virtual machine for security reasons while the latter opens up only the ports for normal use. In this case, it will be for traffic of types ping, ssh, icmp and rdp. Please see Prerequisite No. 
3 for definition of those terms.</p>"},{"location":"openstackcli/How-To-Create-and-Configure-New-Project-on-3Engines-Cloud-Cloud.html.html#create-a-new-project","title":"Create a New Project\ud83d\udd17","text":"<p>A project can contain users, groups and their roles, so the first step is to define a project and later add users, groups and roles.</p>"},{"location":"openstackcli/How-To-Create-and-Configure-New-Project-on-3Engines-Cloud-Cloud.html.html#step-1-create-project","title":"Step 1 Create Project\ud83d\udd17","text":"<p>Choose the Identity \u2192 Projects menu on the left side of the screen.</p> <p></p> <p>Click on the Create Project button.</p> <p>Complete the project name (this is obligatory) and make sure that the checkbox Enable is ticked so that your project becomes active.</p> <p></p> <p>Next switch to the Project Members tab.</p> <p> </p> <p>You can add users to the project by clicking on the \u201c+\u201d icon in the user list.</p> <p>It is possible to grant privileges to all of the members in the project by selecting a proper role from the drop-down menu.</p> <p></p> <p>Role member is the most basic role and has access to most parts of the cloud. Roles starting with k8s- are for accessing Kubernetes clusters, so disregard them if you are not using Kubernetes clusters in your project. For security reasons, the roles heat_stack_user and admin should not be used unless you know what you are doing.</p> <p>The last tab, Project Groups, allows you to add groups of users with the same privileges.</p> <p>To finish setting up a new project, click on the blue Create Project button.</p> <p>If you have set up the configuration properly, the new project should appear in the list. Note the Project ID column as it will be needed in the next step.</p> <p></p> <p>You now have two projects at your disposal:</p> <p></p> <p>To activate the new project, testproject, click on its name. 
The name of the active project will be available in the upper left corner:</p> <p></p> <p>As mentioned earlier, your new project will not have access to the external network. To verify, choose the project, select Network -&gt; Networks and you will see that the new project has no networks defined.</p> <p></p> <p>For security groups, the situation is similar: the default one is present, but the allow_ping_ssh_icmp_rdp security group is missing.</p>"},{"location":"openstackcli/How-To-Create-and-Configure-New-Project-on-3Engines-Cloud-Cloud.html.html#step-2-add-external-network-to-the-project","title":"Step 2 Add external network to the project\ud83d\udd17","text":"<p>To add an external network to such a project, you must contact Customer Support by creating a ticket. Instructions on how to do that are in the article Helpdesk and Support. The ticket should include the project ID from the Projects list. To get the project ID, click on Project -&gt; API Access</p> <p></p> <p>and then on the button View Credentials on the right side. A window with user name, user ID, project name, project ID and authentication URL will appear.</p> <p></p> <p>Copy those values and put them into the email message in the Helpdesk window. Click on the button Create Request to send it:</p> <p></p>"},{"location":"openstackcli/How-To-Create-and-Configure-New-Project-on-3Engines-Cloud-Cloud.html.html#step-3-add-security-group-to-the-project","title":"Step 3 Add security group to the project\ud83d\udd17","text":"<p>When Customer Support replies, you will see the external network in the network list. Then create a security group and enable ports 22 (SSH) and 3389 (RDP) following the instructions in Prerequisite No. 3. Your security group should look like this:</p> <p></p> <p>Port 22 will enable SSH access to the instance, while port 3389 will enable access through RDP. 
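The same security group can also be sketched from the CLI; the group name below mirrors the default one, and an installed, authenticated openstack client is assumed:

```shell
# Create the group that the new project is missing
openstack security group create --description "ping, SSH and RDP" allow_ping_ssh_icmp_rdp

# Port 22 (SSH) and port 3389 (RDP), as described above
openstack security group rule create --protocol tcp --dst-port 22 allow_ping_ssh_icmp_rdp
openstack security group rule create --protocol tcp --dst-port 3389 allow_ping_ssh_icmp_rdp

# ICMP, so that the instance answers ping
openstack security group rule create --protocol icmp allow_ping_ssh_icmp_rdp
```

Rules created this way apply to ingress traffic by default, which is what is needed here.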
SSH and RDP are protocols for accessing a virtual machine from local Linux or Windows machines, respectively.</p>"},{"location":"openstackcli/How-To-Create-and-Configure-New-Project-on-3Engines-Cloud-Cloud.html.html#step-4-create-network-with-router","title":"Step 4 Create network with router\ud83d\udd17","text":"<p>The last step is to create a network with a router. See Prerequisite No. 4.</p>"},{"location":"openstackcli/How-To-Create-and-Configure-New-Project-on-3Engines-Cloud-Cloud.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>Your testproject is ready for creating new instances. For example, see articles:</p> <p>How to create a Linux VM and access it from Windows desktop on 3Engines Cloud</p> <p>How to create a Linux VM and access it from Linux command line on 3Engines Cloud</p> <p>If you want a new user to have access to testproject, the following articles will come in handy:</p> <p>Inviting new user to your Organization.</p> <p>Removing user from Organization.</p> <p>/accountmanagement/Accounts-and-Projects-Management.</p>"},{"location":"openstackcli/How-to-access-object-storage-using-OpenStack-CLI-on-3Engines-Cloud.html.html","title":"How to access object storage using OpenStack CLI on 3Engines Cloud\ud83d\udd17","text":"<p>Cloud computing offers the ability to handle large chunks of data directly on the remote server. 
OpenStack module Swift was created expressly to enable access to unstructured data that can grow without bounds, with the following design goals in mind:</p> <ul> <li>durability,</li> <li>scalability,</li> <li>concurrency across the entire data set,</li> <li>all while keeping the API simple.</li> </ul> <p>Swift is installed as an independent module but on the syntax level, it is used through the parameters of the openstack command.</p>"},{"location":"openstackcli/How-to-access-object-storage-using-OpenStack-CLI-on-3Engines-Cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>How to install Swift</li> <li>How to connect Swift to OpenStack cloud</li> <li>Basic openstack operations with containers</li> <li>Basic openstack operations with objects</li> </ul>"},{"location":"openstackcli/How-to-access-object-storage-using-OpenStack-CLI-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Hosting</p> <p>You need a 3Engines Cloud hosting account, available at https://portal.3Engines.com/. If you want to follow up with articles about object storage on Horizon, you will need this link too: https://horizon.3Engines.com.</p> <p>No. 2 Install or activate openstack command</p> <p>To be able to connect to the cloud, the openstack command must be operational. If not installed already, use the article How to install OpenStackClient for Linux on 3Engines Cloud</p> <p>No. 3 Authenticate to OpenStack using application credentials</p> <p>Then you have to authenticate your account to the cloud. The usual way is to activate the openstack command using an RC file for one- or two-factor authentication. That will not work in the case of the Swift module. It is authenticated with application credentials, as explained in the article</p> <p>How to generate or use Application Credentials via CLI on 3Engines Cloud.</p> <p>No. 
4 Familiarity with object storage on 3Engines Cloud OpenStack</p> <p>This article explains the basics, using the Horizon interface:</p> <p>How to use Object Storage on 3Engines Cloud.</p> <p>Swift can be understood as the CLI tool for accessing object storage under OpenStack.</p> <p>No. 5 Python installed</p> <p>The following articles contain sections on how to install Python:</p>"},{"location":"openstackcli/How-to-backup-an-instance-and-download-it-to-the-desktop-on-3Engines-Cloud.html.html","title":"How to Backup an Instance and Download it to the Desktop on 3Engines Cloud OpenStack Hosting\ud83d\udd17","text":"<p>First, you will need to set up the OpenStack CLI environment on the computer to which you want to download your instance. Depending on the operating system you are using, follow one of the links below:</p> <p>How to install OpenStackClient for Linux on 3Engines Cloud</p> <p>How to install OpenStackClient GitBash for Windows on 3Engines Cloud</p> <p>Assume that you are</p> <ul> <li>logged into your 3Engines Cloud hosting account with Horizon interface https://horizon.3Engines.com and that</li> <li>you have created an instance called vm-john-01.</li> </ul> <p></p>"},{"location":"openstackcli/How-to-backup-an-instance-and-download-it-to-the-desktop-on-3Engines-Cloud.html.html#list-instances-in-your-project","title":"List Instances in Your Project\ud83d\udd17","text":"<p>List instances in your project using the following CLI command:</p> <pre><code>user@ubuntu:~$ openstack server list\n</code></pre> <p>This will be the result:</p> ID Name Status Networks Image Flavor 72170eb7-cee4-41a3-beea-c7d208446130 vm-john-01 ACTIVE test_network=192.168.2.172, 64.225.128.53 Ubuntu 20.04 LTS eo1.medium"},{"location":"openstackcli/How-to-backup-an-instance-and-download-it-to-the-desktop-on-3Engines-Cloud.html.html#create-a-backup","title":"Create a Backup\ud83d\udd17","text":"<p>Now you can create a backup from the command line interface (CLI) in the terminal (replace 
72170eb7-cee4-41a3-beea-c7d208446130 with the ID of your instance):</p> <pre><code>user@ubuntu:~$ openstack server backup create --name backup-01 72170eb7-cee4-41a3-beea-c7d208446130\n</code></pre> <p>Note</p> <p>You can also add the --rotate parameter to the above command if you want to have control over the number of stored backups:</p> <pre><code>user@ubuntu:~$ openstack server backup create --name backup-01 --rotate 2 72170eb7-cee4-41a3-beea-c7d208446130\n</code></pre> <p>You can see the backup \u201cbackup-01\u201d in https://horizon.3Engines.com/project/images</p> <p></p> <p>or with the CLI command:</p> <pre><code>user@ubuntu:~$ openstack image list --private\n</code></pre> <p>The result would be:</p> <pre><code>+--------------------------------------+-----------+--------+\n| ID | Name | Status |\n+--------------------------------------+-----------+--------+\n| 747d720d-a6f4-4554-bf56-16183e5fb7fa | backup-01 | active |\n+--------------------------------------+-----------+--------+\n</code></pre>"},{"location":"openstackcli/How-to-backup-an-instance-and-download-it-to-the-desktop-on-3Engines-Cloud.html.html#download-the-backup-file","title":"Download the Backup File\ud83d\udd17","text":"<p>A disk image is a raw copy of the hard drive of your virtual machine. 
You can download it using the following command (replace 747d720d-a6f4-4554-bf56-16183e5fb7fa with the ID of your disk image):</p> <pre><code>user@ubuntu:~$ openstack image save --file backup-on-the-desktop 747d720d-a6f4-4554-bf56-16183e5fb7fa\n</code></pre>"},{"location":"openstackcli/How-to-backup-an-instance-and-download-it-to-the-desktop-on-3Engines-Cloud.html.html#upload-the-backed-up-file","title":"Upload the Backed Up File\ud83d\udd17","text":"<p>After that, you can upload your backup file using the Horizon dashboard:</p> <p>Go to Project \u2192 Compute \u2192 Images.</p> <p></p> <p>Click on \u201cCreate Image\u201d.</p> <p></p> <p>On this panel, enter the image name and choose the backup file and the backup format. Next click on \u201cCreate Image\u201d.</p> <p></p> <p>You can also use CLI commands to upload the backup file:</p> <pre><code>user@ubuntu:~$ openstack image create --file path/to/backup &lt;backup_name&gt;\n</code></pre>"},{"location":"openstackcli/How-to-create-a-set-of-VMs-using-OpenStack-Heat-Orchestration-on-3Engines-Cloud.html.html","title":"How to create a set of VMs using OpenStack Heat Orchestration on 3Engines Cloud\ud83d\udd17","text":"<p>Heat is an OpenStack component responsible for Orchestration. Its purpose is to deliver an automation engine and to optimize processes.</p> <p>Heat receives commands through templates, which are text files in YAML format. A template describes the entire infrastructure that you want to deploy. 
The deployed environment is called a stack and can consist of any combination of the 102 different resources available in OpenStack.</p>"},{"location":"openstackcli/How-to-create-a-set-of-VMs-using-OpenStack-Heat-Orchestration-on-3Engines-Cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Typical parts of a Heat template</li> <li>Basic template for using Heat</li> <li>How to get data for Heat Template</li> <li>Using Heat with CLI</li> <li>Using Heat with GUI</li> <li>More advanced template for Heat</li> </ul>"},{"location":"openstackcli/How-to-create-a-set-of-VMs-using-OpenStack-Heat-Orchestration-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p> <p>No. 2 Installed Python and its virtualenv</p> <p>If you want to use Heat through CLI commands, Python must be installed and its virtual environment activated. See the article How to install Python virtualenv or virtualenvwrapper on 3Engines Cloud.</p> <p>If you have never installed one of the OpenStack clients, see: How To Install OpenStack and Magnum Clients for Command Line Interface to 3Engines Cloud Horizon.</p>"},{"location":"openstackcli/How-to-create-a-set-of-VMs-using-OpenStack-Heat-Orchestration-on-3Engines-Cloud.html.html#always-use-the-latest-value-of-image-id","title":"Always use the latest value of image id\ud83d\udd17","text":"<p>From time to time, the default images of operating systems in the 3Engines Cloud cloud are upgraded to new versions. As a consequence, their image id will change. Let\u2019s say that the image id for Ubuntu 20.04 LTS was 574fe1db-8099-4db4-a543-9e89526d20ae at the time of writing of this article. 
While working through the article, you would normally take the current value of image id and use it to replace 574fe1db-8099-4db4-a543-9e89526d20ae throughout the text.</p> <p>Now, suppose you wanted to automate processes under OpenStack, perhaps using Heat, Terraform, Ansible or any other tool for OpenStack automation; if you use the value of 574fe1db-8099-4db4-a543-9e89526d20ae for image id, it would remain hardcoded and once this value gets changed during the upgrade, the automated process may stop executing.</p> <p>Warning</p> <p>Make sure that your automation code is using the current value of an OS image id, not the hardcoded one.</p>"},{"location":"openstackcli/How-to-create-a-set-of-VMs-using-OpenStack-Heat-Orchestration-on-3Engines-Cloud.html.html#basic-template-for-using-heat","title":"Basic template for using Heat\ud83d\udd17","text":"<p>Using the following snippet, you can create one virtual machine, booted from an ephemeral disk. Create a text file called template.yaml with your favorite text editor and save it to disk:</p> <pre><code>heat_template_version: 2015-04-30\n\nresources:\n instance:\n type: OS::Nova::Server\n properties:\n flavor: eo1.xsmall\n image: Ubuntu 18.04 LTS\n networks:\n - network: &lt;type in your network name here, e.g. cloud_00341_3&gt;\n - network: &lt;type in your network name here, e.g. eodata_00341_3&gt;\n key_name: &lt;type in your ssh key name&gt;\n security_groups:\n - allow_ping_ssh_icmp_rdp\n - default\n</code></pre> <p>Important</p> <p>YAML format does not allow tabs; you must enter spaces instead.</p>"},{"location":"openstackcli/How-to-create-a-set-of-VMs-using-OpenStack-Heat-Orchestration-on-3Engines-Cloud.html.html#typical-parts-of-a-heat-template","title":"Typical parts of a Heat template\ud83d\udd17","text":"<p>Here are the basic elements of a Heat template:</p> heat_template_version The exact version of the heat template. 
Each of them varies in many ways (including support for various modules, additional parameters, customization etc.). See Orchestration -&gt; Template Versions. resources Entry to commence providing particular components for deployment. instance Name of the resource (you can type in anything on your own). type Definition of an OpenStack component (a comprehensive list is under Orchestration -&gt; Resource Types). properties Required parameters for deploying a component. <p>Note</p> <p>Your account will normally have a network starting with cloud_ but it may also have other networks. In the following examples, we use a network called eodata_ as an example of an additional network that can be added while creating and using Heat templates.</p>"},{"location":"openstackcli/How-to-create-a-set-of-VMs-using-OpenStack-Heat-Orchestration-on-3Engines-Cloud.html.html#how-to-get-data-for-heat-template","title":"How to get data for Heat template\ud83d\udd17","text":"<p>Templates need data for images, flavors, networks, key pairs, security groups and so on. 
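Assuming an authenticated openstack client, each of these values can also be read from the CLI:

```shell
openstack image list            # image
openstack flavor list           # flavor
openstack network list          # networks
openstack keypair list          # key_name
openstack security group list   # security_groups
```

Each command lists the names and IDs that can be pasted into the corresponding template field.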
You would normally know all these elements in advance, or you could \u201clook around\u201d at various parts of the OpenStack environment:</p> flavor Compute -&gt; Instances -&gt; Launch Instance -&gt; Flavor image Compute -&gt; Instances -&gt; Launch Instance -&gt; Source networks Network -&gt; Networks -&gt; cloud and eodata networks for your domain key_name Compute -&gt; Key Pairs security_groups Network -&gt; Security Groups <p>You can work with Heat in two ways:</p> <ul> <li>through Command Line Interface (CLI), with python-heatclient preinstalled and</li> <li>interactively, through Horizon commands.</li> </ul>"},{"location":"openstackcli/How-to-create-a-set-of-VMs-using-OpenStack-Heat-Orchestration-on-3Engines-Cloud.html.html#using-heat-with-cli","title":"Using Heat with CLI\ud83d\udd17","text":"<p>Assuming you have</p> <ul> <li>installed Python and</li> <li>activated its working environment</li> </ul> <p>as explained in Prerequisite No. 2, run the pip command to install python-heatclient:</p> <pre><code>pip install python-heatclient\n</code></pre> <p>To run a prepared template in order to deploy a stack, this is what a general command would look like:</p> <pre><code>openstack stack create -t template.yaml &lt;stackname&gt;\n</code></pre> <p>where -t specifies the template to deploy and &lt;stackname&gt; defines the name of the stack.</p> <p>As a result, a new stack would be executed and a new instance would be created. 
For example, the command</p> <pre><code>openstack stack create -t template.yaml heat-test2\n</code></pre> <p>would produce the following output:</p> <p></p> <p>In Horizon, this is what you would see under Orchestration -&gt; Stacks:</p> <p></p> <p>A new instance would be created under Compute -&gt; Instances:</p> <p></p>"},{"location":"openstackcli/How-to-create-a-set-of-VMs-using-OpenStack-Heat-Orchestration-on-3Engines-Cloud.html.html#using-heat-with-gui","title":"Using Heat with GUI\ud83d\udd17","text":"<p>Log in to the Horizon dashboard, choose Orchestration and then the Stacks tab:</p> <p></p> <p>Navigate to the right part of the screen and click on the Launch Stack button to bring the Select Template window to the screen.</p> <p>Expand the Template Source selector and choose File, Direct Input or URL for your template.</p> <p></p> <p>Enter the text of the template you copied from file template.yaml directly into the form:</p> <p></p> <p>Provide a name for your stack and your OpenStack password:</p> <p></p> <p>As a result, a new Heat template will have been created:</p> <p></p> <p>By creating a stack in Horizon you have also executed that template. The result is that a new instance has been created \u2013 see it under Compute -&gt; Instances:</p> <p></p> <p>We end up with two stacks and two new instances, one created using the CLI and the other using the GUI.</p>"},{"location":"openstackcli/How-to-create-a-set-of-VMs-using-OpenStack-Heat-Orchestration-on-3Engines-Cloud.html.html#create-four-vms-using-an-advanced-heat-template","title":"Create four VMs using an advanced Heat template\ud83d\udd17","text":"<p>In the following example we will attach parameters and then create a ResourceGroup with a counter, a VM booted from a Cinder Volume and several predefined outputs. In the parameter count we state that we want to generate 4 instances at once, which yields the automation we wanted in the first place. 
Save the following code as template4.yaml:</p> <pre><code>heat_template_version: 2015-04-30\n\nparameters:\n key_name:\n type: string\n label: sshkey\n description: SSH key to be used for all instances\n default: &lt;insert your ssh key name here&gt;\n image_id:\n type: string\n description: Image to be used. Check all available options in Horizon dashboard or, with CLI, use openstack image list command.\n default: Ubuntu 18.04 LTS\n private_net_id:\n type: string\n description: ID/Name of private network\n default: &lt;insert your network name here, e.g. cloud_00341_3&gt;\n\nresources:\n Group_of_VMs:\n type: OS::Heat::ResourceGroup\n properties:\n count: 4\n resource_def:\n type: OS::Nova::Server\n properties:\n name: my_vm%index%\n flavor: eo1.xsmall\n image: { get_param: image_id }\n networks:\n - network: { get_param: private_net_id }\n key_name: { get_param: key_name }\n security_groups:\n - allow_ping_ssh_icmp_rdp\n - default\n\n VOL_FAQ:\n type: OS::Cinder::Volume\n properties:\n name: vol\n size: 20\n image : { get_param: image_id }\n\n With_volume:\n type: OS::Nova::Server\n properties:\n flavor: eo1.xsmall\n block_device_mapping: [{\"volume_size\": 20, \"volume_id\": { get_resource: VOL_FAQ }, \"delete_on_termination\": False, \"device_name\": \"/dev/vda\" }]\n networks:\n - network: { get_param: private_net_id }\n key_name: { get_param: key_name }\n security_groups:\n - allow_ping_ssh_icmp_rdp\n - default\n image : { get_param: image_id }\n\noutputs:\n SERVER_DETAILS:\n description: Shows details of all virtual servers.\n value: { get_attr: [ Group_of_VMs, show ] }\n</code></pre> <p>The first step is to create a real volume (called VOL_FAQ) and the second is to create a VM (With_volume).</p> <p>Explanation</p> Parameters <p>Here you provide default values (key_name, image_id, private_net_id in this case) and later inject them into resource definitions. 
The syntax is:</p> <pre><code>{ get_param: param_name }\n</code></pre> ResourceGroup Component being used for repeating deployment, e.g. two identical VMs. Count Defines a variable for iterative operations. resource_def Starting statement for defining group resources. %index% This is how you add an iterative number to the VM name, with values increasing from 0. block_device_mapping Property to define a bootable Cinder volume for the instance. outputs Additional information concerning deployed elements of the stack. In this case it returns a \u201cshow\u201d attribute output. You can examine this kind of information by using openstack stack output list. Available attributes for every component can be found here. <p>Execute the template with the following command:</p> <pre><code>openstack stack create -t template4.yaml four\n</code></pre> <p>The name of the stack will be four. This is the result in the CLI window:</p> <p></p> <p>Under Compute -&gt; Instances you would see five new instances created:</p> <p></p> <p>Four of them have names my_vm0, my_vm1, my_vm2 and my_vm3, as defined in line name: my_vm%index% in the template. The fifth is called four-With_volume-lrejw222kfvi. 
Its name starts with the name of the stack and the resource, while the rest is automatically generated.</p>"},{"location":"openstackcli/How-to-create-a-set-of-VMs-using-OpenStack-Heat-Orchestration-on-3Engines-Cloud.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>You can write your own templates as yaml files or you can use the option Orchestration -&gt; Template Generator, which will enable you to enter components in an interactive way:</p> <p></p> <p>Further explanation of this option is outside the scope of this article.</p>"},{"location":"openstackcli/How-to-create-instance-snapshot-using-OpenStack-CLI-on-3Engines-Cloud.html.html","title":"How to create instance snapshot using OpenStack CLI on 3Engines Cloud\ud83d\udd17","text":"<p>In this article, you will learn how to create an instance snapshot on the 3Engines Cloud cloud, using OpenStack CLI.</p> <p>Instance snapshots allow you to archive the state of the virtual machine. You can, then, use them for</p> <ul> <li>backup,</li> <li>migration between clouds,</li> <li>disaster recovery and/or</li> <li>cloning environments for testing or development.</li> </ul> <p>We cover both types of storage for instances, ephemeral and persistent.</p>"},{"location":"openstackcli/How-to-create-instance-snapshot-using-OpenStack-CLI-on-3Engines-Cloud.html.html#the-plan","title":"The plan\ud83d\udd17","text":"<p>In reality, you will be using the procedures described in this article with already existing instances.</p> <p>However, to get a clear grasp of the process, while following this article you are going to create two new instances, one with ephemeral and the other with persistent type of storage. Let their names be instance-which-uses-ephemeral and instance-which-uses-volume. 
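The snapshot operation itself is one CLI call per instance. A sketch, assuming an authenticated openstack client and instances named as above; the snapshot names are placeholders:

```shell
# Snapshot the ephemeral-storage instance; the result is stored as a Glance image
openstack server image create --name snapshot-ephemeral instance-which-uses-ephemeral

# The same call works for the volume-backed instance
openstack server image create --name snapshot-volume instance-which-uses-volume

# Verify that the snapshots appeared among the project's private images
openstack image list --private
```
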
You will create an instance snapshot for each of them.</p> <p>If you are only interested in one of these types of instances, you can follow its respective section of this text.</p> <p>It goes without saying that after following a section about one type of virtual machine you can clean up the resources you created to, say, save costs.</p> <p>Or you can keep them and use them to create an instance out of them, using one of the articles mentioned in What To Do Next.</p>"},{"location":"openstackcli/How-to-create-instance-snapshot-using-OpenStack-CLI-on-3Engines-Cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":""},{"location":"openstackcli/How-to-create-instance-snapshot-using-OpenStack-CLI-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p> <p>No. 2 Ephemeral storage vs. persistent storage</p> <p>Please see the article Ephemeral vs Persistent storage option Create New Volume on 3Engines Cloud to understand the basic difference between ephemeral and persistent types of storage in OpenStack.</p> <p>No. 3 Instance with ephemeral storage</p> <p>You need a virtual machine hosted on the 3Engines Cloud cloud.</p> <p>You can create an instance with ephemeral storage by following this article: How to create a VM using the OpenStack CLI client on 3Engines Cloud cloud</p> <p>The actual command used to create an instance from that article was</p> <pre><code>openstack server create \\\n--image Debian-custom-upload \\\n--flavor eo1.small \\\n--key-name ssh-key \\\n--network cloud_00734_1 \\\n--network eodata \\\n--security-group default \\\n--security-group allow_ping_ssh_icmp_rdp \\\nTest-Debian\n</code></pre> <p>In the examples in this article, we are using the default image Ubuntu 22.04 LTS.</p> <p>With ephemeral storage, only one new instance is created.</p> <p>No. 
4 Instance with persistent storage</p> <p>When creating an instance with persistent storage, you just add one new option to the above command; the option is --boot-from-volume followed by a</p> <ul> <li>space and the</li> <li>desired size of the new volume in gigabytes.</li> </ul> <p>Make sure to enter the amount of storage sufficient for your needs.</p> <p>You can also look at the storage size available with your chosen virtual machine flavor for guidance (openstack flavor list command, column Disk).</p> <p>For instance, if you want your boot volume to have 16 GB, add the following:</p> <pre><code>--boot-from-volume 16\n</code></pre> <p>The complete command would, then, look like this:</p> <pre><code>openstack server create \\\n--image Debian-custom-upload \\\n--flavor eo1.small \\\n--key-name ssh-key \\\n--network cloud_00734_1 \\\n--network eodata_00734_1 \\\n--security-group default \\\n--security-group allow_ping_ssh_icmp_rdp \\\n--boot-from-volume 16 \\\nTest-Debian\n</code></pre> <p>In the examples in this article, we are using the default image Ubuntu 22.04 LTS.</p> <p>With persistent storage, one instance and one volume are created:</p> <ul> <li>a special kind of instance (with no ephemeral storage) and</li> <li>the volume that is attached to that instance.</li> </ul> <p>The instance will boot from the volume that was attached during the creation of the instance.</p> <p>An instance can have two or more volumes attached to it; however, only one will serve as its boot drive.</p> <p>No. 5 How to delete resources</p> <p>If you want to learn how to delete instances, snapshots, volumes and other OpenStack objects, please have a look at the following articles:</p> <p>/networking/How-to-correctly-delete-all-the-resources-in-the-project-via-OpenStack-commandline-Clients-on-3Engines-Cloud.</p> <p>How to create or delete volume snapshot on 3Engines Cloud.</p> <p>No. 6 OpenStack CLI client</p> <p>You need to have the OpenStack CLI client installed. 
One of the following articles should help you:</p>"},{"location":"openstackcli/How-to-install-OpenStackClient-GitBash-or-Cygwin-for-Windows-on-3Engines-Cloud.html.html","title":"How to install OpenStackClient GitBash for Windows on 3Engines Cloud\ud83d\udd17","text":"<p>In this tutorial, you start with a standard Windows installation, then install the OpenStack CLI client and end up connecting to your project on the 3Engines Cloud cloud.</p> <p>For another way of installing OpenStack CLI on Windows, see the article How to install OpenStackClient on Windows using Windows Subsystem for Linux on 3Engines Cloud OpenStack Hosting. However:</p> <ul> <li>using Git Bash is simpler than using Windows Subsystem for Linux and</li> <li>it provides more straightforward access to your local file system.</li> </ul>"},{"location":"openstackcli/How-to-install-OpenStackClient-GitBash-or-Cygwin-for-Windows-on-3Engines-Cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Installing the required software (Python 3, PIP, Git for Windows and the appropriate compilers)</li> <li>Creating an isolated Python environment for installing the OpenStack CLI client</li> <li>Installing the OpenStack CLI client</li> <li>Authenticating the OpenStack CLI client to the cloud</li> <li>Executing a simple command to test whether the process was successful</li> </ul>"},{"location":"openstackcli/How-to-install-OpenStackClient-GitBash-or-Cygwin-for-Windows-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Hosting</p> <p>You need a 3Engines Cloud hosting account with Horizon interface https://horizon.3Engines.com.</p> <p>No. 2 Computer running Microsoft Windows</p> <p>Your computer or virtual machine must be running Microsoft Windows 10 version 1909 or Windows 11. Also, Windows Server 2016, 2019 and 2022 are supported. 
This is due to the requirements of Microsoft Visual Studio.</p> <p>Obtaining a valid license for Microsoft C++ Build Tools and other software mentioned here is outside the scope of this text.</p> <p>Installing Microsoft C++ Build Tools, as described in this article, might require more than 10 GiB of hard drive space. The exact amount is subject to change. During this process, make sure that you do not run out of storage.</p> <p>No. 3 Basic knowledge of the Linux terminal</p> <p>You will need basic knowledge of the Linux command line.</p> <p>No. 4 RC file downloaded</p> <p>You need to download the RC file from your Horizon dashboard. To do that, follow the section How to download the RC file of the following article: /gettingstarted/How-to-activate-OpenStack-CLI-access-to-3Engines-Cloud-cloud-using-one-or-two-factor-authentication.</p> <p>This file must be present on the machine on which you intend to use the OpenStack CLI client.</p>"},{"location":"openstackcli/How-to-install-OpenStackClient-GitBash-or-Cygwin-for-Windows-on-3Engines-Cloud.html.html#step-1-download-and-install-python","title":"Step 1: Download and Install Python\ud83d\udd17","text":"<p>There are two ways of obtaining Python on the 3Engines Cloud cloud:</p> <ul> <li>It may come preinstalled on virtual machines that were created using one of the default Windows images.</li> <li>You may download and install the latest version from the Internet.</li> </ul> <p>The latter solution will either install Python anew or update the existing installation, so it is still a recommended step.</p> <p>If you are going to use your own computer and do not have Python installed, follow the instructions below.</p> <p>Start your Internet browser and open https://www.python.org</p> <p>Hover your mouse over the Downloads button and choose Windows from the menu that has just appeared.</p> <p>Pick the latest version of Python.</p> <p>Download it and run that .exe file. 
Make sure that the options at the bottom of the window are selected and click Customize installation.</p> <p></p> <p>In the next screen, select all the Optional Features:</p> <p></p> <p>Click Next.</p> <p>On the Advanced Options screen, select the options shown in the screenshot below and make sure that the installation location is in the Program Files directory:</p> <p></p> <p>Click Install and wait until the installation is completed:</p> <p></p> <p>On the last screen, click the option Disable path length limit:</p> <p></p> <p>The button Disable path length limit should disappear. Click Close.</p> <p>Open the Windows command prompt and execute the python command in it to check whether the installation was successful. You should see output similar to this:</p> <p></p> <p>Close the command prompt.</p>"},{"location":"openstackcli/How-to-install-OpenStackClient-GitBash-or-Cygwin-for-Windows-on-3Engines-Cloud.html.html#step-2-install-git-bash-and-pip","title":"Step 2: Install Git Bash and pip\ud83d\udd17","text":"<p>Git Bash for Windows is a set of programs that emulates the Linux terminal, allowing you to use common Linux commands such as ls, source, and mv.</p> <p>It is part of Git for Windows. 
Download that software from https://gitforwindows.org and execute the installer.</p> <p>During the installation, keep the default options selected.</p> <p>After installation, a Git Bash entry should appear in the Start menu:</p> <p></p> <p>Other programs in the suite are Git CMD, Git GUI, and others.</p> <p>The installation of Python and its suite of programs requires you to additionally install pip and update the necessary PythonSSL certificates.</p>"},{"location":"openstackcli/How-to-install-OpenStackClient-GitBash-or-Cygwin-for-Windows-on-3Engines-Cloud.html.html#step-3-install-pip-and-update-the-pythonssl-certificates","title":"Step 3: Install pip and update the PythonSSL certificates\ud83d\udd17","text":"<p>pip is a tool for managing and installing Python packages.</p> <p>Download get-pip.py from https://bootstrap.pypa.io/get-pip.py. If it opens in your browser as a plain text document, right-click anywhere on it in the browser and use the Save as\u2026 or similar option to save it on your computer.</p> <p></p> <p>Run the script by opening it in Python. If Python is not the default program for opening .py files on your system, right-click the file and use the Open with\u2026 or similar option and choose Python there.</p> <p>It will install pip. 
The installation process can be monitored in a terminal window.</p> <p>In order to test whether the installation was successful, use the Start menu to start Git Bash and type the following command:</p> <pre><code>pip -V\n</code></pre> <p>Your output should contain the version of pip that you have:</p> <p></p> <p>Now update the PythonSSL certificates on your computer:</p> <pre><code>pip install -U requests[security]\n</code></pre> <p></p>"},{"location":"openstackcli/How-to-install-OpenStackClient-GitBash-or-Cygwin-for-Windows-on-3Engines-Cloud.html.html#step-4-install-microsoft-c-build-tools","title":"Step 4: Install Microsoft C++ Build Tools\ud83d\udd17","text":"<p>Microsoft C++ Build Tools are required to install the OpenStack CLI client using pip on Windows.</p> <p>Enter the following website: https://visualstudio.microsoft.com/visual-cpp-build-tools/</p> <p>Click Download Build Tools. Execute the downloaded .exe file.</p> <p>During installation, choose Desktop development with C++ (the correct option at the time of writing):</p> <p></p> <p>Click Install. Wait until the installation process is completed.</p> <p>Warning</p> <p>The installation process might take a long time.</p> <p>Reboot your computer if the installer prompts you to do so.</p>"},{"location":"openstackcli/How-to-install-OpenStackClient-GitBash-or-Cygwin-for-Windows-on-3Engines-Cloud.html.html#step-5-install-virtualenv-and-the-openstack-cli-client","title":"Step 5: Install virtualenv and the OpenStack CLI client\ud83d\udd17","text":"<p>virtualenv allows you to perform Python operations in an isolated environment. In order to install it, open Git Bash if you previously closed it or rebooted your computer, and execute the following command:</p> <pre><code>pip install virtualenv\n</code></pre> <p>Using the cd command, enter the directory in which you want to store the environment for the OpenStack CLI client. 
You will need it later on, so make it easily accessible, for example:</p> <pre><code>cd C:/Users/Administrator\n</code></pre> <p>Execute the following command to create the virtual environment openstack_cli which will be used for the OpenStack CLI client:</p> <pre><code>virtualenv openstack_cli\n</code></pre> <p>Note</p> <p>You must supply the name of the environment (here, openstack_cli), but the name itself is completely up to you.</p> <p>A directory called openstack_cli should appear in the current folder. It will contain files needed for your isolated environment. In order to enter that environment, run the source command on the activate file located in the Scripts folder inside your virtual environment folder:</p> <pre><code>source openstack_cli/Scripts/activate\n</code></pre> <p>From now on, the name of your isolated environment - openstack_cli - will appear in parentheses before each command prompt, indicating that you are inside it.</p> <p></p> <p>Closing and reopening the terminal will drop you out of that environment.</p>"},{"location":"openstackcli/How-to-install-OpenStackClient-GitBash-or-Cygwin-for-Windows-on-3Engines-Cloud.html.html#how-git-bash-terminal-commands-differ-from-those-in-windows","title":"How Git Bash terminal commands differ from those in Windows\ud83d\udd17","text":"<p>In Git Bash, there are two ways of pasting text from the clipboard:</p> <ul> <li>the key combination Shift+Ins, or</li> <li>right-click the Git Bash window and select Paste from the displayed menu.</li> </ul> <p>The usual Windows commands such as CTRL+V or CTRL+Shift+V won\u2019t work in the Git Bash window.</p> <p>Git Bash emulates UNIX-based systems, so while you are in it, use forward slashes rather than the typical Windows backslashes.</p>"},{"location":"openstackcli/How-to-install-OpenStackClient-GitBash-or-Cygwin-for-Windows-on-3Engines-Cloud.html.html#step-6-download-and-prepare-jq","title":"Step 6: Download and prepare jq\ud83d\udd17","text":"<p>To authenticate the OpenStack 
CLI client in the next step, a program called jq will be needed. It is a JSON processor that runs from the command line. To install it, navigate to https://jqlang.github.io/jq/download/ using your Internet browser.</p> <p>Download the latest 64-bit executable version of jq for Windows.</p> <p>A file with the .exe extension should be downloaded. Rename it to simply jq (make sure that it still has the .exe extension).</p> <p>Navigate to its location using the cd command in Git Bash. Do it just as you would on a Linux command line. Execute the following command:</p> <pre><code>mv jq.exe /usr/bin\n</code></pre> <p>This should allow you to use jq with the RC file easily.</p>"},{"location":"openstackcli/How-to-install-OpenStackClient-GitBash-or-Cygwin-for-Windows-on-3Engines-Cloud.html.html#step-7-install-and-configure-the-openstack-cli-client","title":"Step 7: Install and configure the OpenStack CLI client\ud83d\udd17","text":"<p>Without leaving Git Bash, while still inside the openstack_cli virtual environment, execute the following command:</p> <pre><code>pip install python-openstackclient\n</code></pre> <p>Wait until the process is completed. As a result, you will be able to run the openstack command at the terminal prompt. It, however, won\u2019t yet have access to the 3Engines Cloud, so the next step is to authenticate to the cloud.</p> <p>Navigate to the location of the RC file which you downloaded while following Prerequisite No. 4 and execute the source command on it. It could look like this (if the name of your RC file is main-openrc.sh):</p> <pre><code>source main-openrc.sh\n</code></pre> <p>After that, you will receive a prompt for your password. Enter it and press Enter (while typing the password, no characters should appear).</p> <p>If your account has two-factor authentication enabled, you will get a prompt for the six-digit code. 
Enter it and press Enter.</p> <p>Here is what the two-step process of authentication looks like for an RC file called main-openrc.sh:</p> <p></p> <p>In the screenshot above, the username and project name were hidden for privacy reasons.</p> <p>In order to test whether the OpenStack CLI client works, list the virtual machines you currently operate. The command is:</p> <pre><code>openstack server list\n</code></pre> <p>The output should contain a table listing the virtual machines from your project.</p>"},{"location":"openstackcli/How-to-install-OpenStackClient-GitBash-or-Cygwin-for-Windows-on-3Engines-Cloud.html.html#reentering-the-isolated-python-environment","title":"Reentering the Isolated Python Environment\ud83d\udd17","text":"<p>To run the OpenStack CLI client again, for example after you have closed the Git Bash window or shut down or restarted Windows, repeat the same commands you entered above (replace C:/Users/Administrator with the path containing your openstack_cli folder).</p> <pre><code>cd C:/Users/Administrator\nsource openstack_cli/Scripts/activate\n</code></pre> <p>After that, execute the source command on your RC file in the same way as previously.</p> <p>You can also create a batch file to automate reentering the Python environment.</p>"},{"location":"openstackcli/How-to-install-OpenStackClient-GitBash-or-Cygwin-for-Windows-on-3Engines-Cloud.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>The article How To Install OpenStack and Magnum Clients for Command Line Interface to 3Engines Cloud Horizon will give you another procedure for installing the CLI and connecting it to the cloud. 
It also contains several examples of using the CLI commands.</p> <p>Other articles of interest:</p> <p>How to Create and Configure New Openstack Project Through Horizon on 3Engines Cloud Cloud</p> <p>How to create a set of VMs using OpenStack Heat Orchestration on 3Engines Cloud</p> <p>Using the CLI interface for Kubernetes clusters:</p> <p>How To Use Command Line Interface for Kubernetes Clusters On 3Engines Cloud OpenStack Magnum</p> <p>Also see</p> <p>How to activate OpenStack CLI access to 3Engines Cloud cloud using one- or two-factor authentication</p>"},{"location":"openstackcli/How-to-install-OpenStackClient-for-Linux-on-3Engines-Cloud.html.html","title":"How to install OpenStackClient for Linux on 3Engines Cloud\ud83d\udd17","text":"<p>The OpenStack CLI client allows you to manage OpenStack environments using the command line interface. Its functions include:</p> <ul> <li>Creating, starting, shutting down, shelving, deleting, and rebooting virtual machines</li> <li>Assigning a floating IP to your virtual machine</li> <li>Listing available resources, including volumes, virtual machines and floating IPs</li> </ul> <p>You can also automate these operations using scripts.</p> <p>This article covers two methods of installing this piece of software on Ubuntu. The first method should be more convenient and sufficient for most needs. The second method is for advanced use cases, such as:</p> <ul> <li>keeping multiple versions of the OpenStack CLI client ready to use on the same computer,</li> <li>needing more advanced features than the Ubuntu packages provide, or</li> <li>having to use the OpenStack CLI client on a Linux distribution that does not support the first installation method.</li> </ul>"},{"location":"openstackcli/How-to-install-OpenStackClient-for-Linux-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 
1 Hosting</p> <p>You need a 3Engines Cloud hosting account with Horizon interface https://horizon.3Engines.com.</p> <p>No. 2 Linux installed on your computer</p> <p>You need to have Linux installed on your local computer or a virtual machine. This article was written for Ubuntu 22.04 LTS and Python 3. Instructions for other Linux distributions might be different.</p> <p>If you choose a virtual machine, you can run it yourself, or it can be, say, a virtual machine running on the 3Engines Cloud. If you choose this latter option, the following articles might help you:</p>"},{"location":"openstackcli/How-to-install-OpenStackClient-on-Windows-using-Windows-Subsystem-for-Linux-on-3Engines-Cloud-OpenStack-Hosting.html.html","title":"How to install OpenStackClient on Windows using Windows Subsystem for Linux on 3Engines Cloud OpenStack Hosting\ud83d\udd17","text":"<p>In this tutorial, you will control your OpenStack environment in a deeper and more precise way using the CLI (Command Line Interface). 
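One part of that precision is machine-readable output: most CLI commands accept a -f json output formatter, which scripts can parse. The Python sketch below parses a canned sample of what openstack server list -f json might print; the server IDs and names are invented for illustration.

```python
import json

# Canned stand-in for the output of "openstack server list -f json";
# the IDs and names below are made up for this illustration.
sample_output = """[
  {"ID": "6e2a0000-0000-0000-0000-000000000001", "Name": "vm-web-1", "Status": "ACTIVE"},
  {"ID": "9f1b0000-0000-0000-0000-000000000002", "Name": "vm-db-1", "Status": "SHUTOFF"}
]"""

servers = json.loads(sample_output)

# Pick out only the running servers, as a management script might
# before acting on them (snapshotting, resizing, and so on).
active = [s["Name"] for s in servers if s["Status"] == "ACTIVE"]
print(active)  # prints ['vm-web-1']
```

In a real script you would feed in the command's actual output instead of the canned string, for example via subprocess.run(["openstack", "server", "list", "-f", "json"], capture_output=True, text=True).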
Of course, you can use the Horizon GUI (Graphical User Interface) running in your browser, but the CLI includes additional features like the ability to use scripts for more automated management of your environment.</p> <p>The instructions for installing Windows Subsystem for Linux are based on the official Windows documentation found at https://learn.microsoft.com/en-us/windows/wsl/.</p>"},{"location":"openstackcli/How-to-install-OpenStackClient-on-Windows-using-Windows-Subsystem-for-Linux-on-3Engines-Cloud-OpenStack-Hosting.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Installing Windows Subsystem for Linux on Microsoft Windows</li> <li>Installing the OpenStack CLI client and authenticating</li> </ul>"},{"location":"openstackcli/How-to-install-OpenStackClient-on-Windows-using-Windows-Subsystem-for-Linux-on-3Engines-Cloud-OpenStack-Hosting.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Hosting</p> <p>You need a 3Engines Cloud hosting account with Horizon interface https://horizon.3Engines.com.</p> <p>No. 2 Computer running Microsoft Windows</p> <p>Your computer must be running Microsoft Windows. This article is written for Windows Server 2019 version 1709 or later. The instructions for the following versions are linked in the appropriate location of this article:</p> <ul> <li>Windows 10 version 1903 up to and excluding version 2004</li> <li>Windows 10 version 2004 or later (Build 19041), Windows 11</li> <li>Windows Server 2022</li> </ul> <p>No. 3 Optional \u2013 software for 2FA authentication</p> <p>Your account at 3Engines Cloud cloud may have two-factor authentication enabled. It means that apart from the usual username and password combination, you also need software to generate the TOTP \u2013 the six-digit code for the additional, second step of authentication. 
This article will provide additional technical details: How to activate OpenStack CLI access to 3Engines Cloud cloud using one- or two-factor authentication.</p>"},{"location":"openstackcli/How-to-install-OpenStackClient-on-Windows-using-Windows-Subsystem-for-Linux-on-3Engines-Cloud-OpenStack-Hosting.html.html#step-1-check-the-version-of-windows","title":"Step 1: Check the version of Windows\ud83d\udd17","text":"<p>Right-click your Start menu and left-click \u201cSystem\u201d.</p> <p>A screen will appear in which you will see the version of your Microsoft Windows operating system. Memorize it or write it down somewhere.</p>"},{"location":"openstackcli/How-to-install-OpenStackClient-on-Windows-using-Windows-Subsystem-for-Linux-on-3Engines-Cloud-OpenStack-Hosting.html.html#step-2-install-ubuntu-on-windows-subsystem-for-linux","title":"Step 2: Install Ubuntu on Windows Subsystem for Linux\ud83d\udd17","text":"<p>Note</p> <p>The instructions in this step are for Windows Server 2019 version 1709 or later. If you are running a different operating system, please follow the instructions found under the appropriate link and skip to Step 3:</p> <ul> <li>Windows Server 2022 - https://learn.microsoft.com/en-us/windows/wsl/install-on-server section Install WSL on Windows Server 2022</li> <li>Windows 10 version 1903 up to and excluding version 2004 - https://learn.microsoft.com/en-us/windows/wsl/install-manual</li> <li>Windows 10 version 2004 or later (Build 19041), Windows 11 - https://learn.microsoft.com/en-us/windows/wsl/install</li> </ul> <p>Enter the following website: https://learn.microsoft.com/en-us/windows/wsl/install-manual#downloading-distributions. Download Ubuntu 20.04 using the provided link. 
This tutorial assumes that your browser saved it in your Downloads directory; if that is not the case, please modify the instructions accordingly.</p> <p>Locate the downloaded file:</p> <p></p> <p>Right-click it and select the option Rename.</p> <p></p> <p>Rename the downloaded file to Ubuntu.zip:</p> <p></p> <p>Right-click the file and select Extract All\u2026.</p> <p></p> <p>In the wizard that appears, do not change any options and click Extract:</p> <p></p> <p>A directory called Ubuntu should appear:</p> <p></p> <p>Enter that folder and view its content:</p> <p></p> <p>Memorize or write down the name of the .appx file which ends with x64.</p> <p>Open your Start menu. Right-click the entry Windows PowerShell and select Run as administrator:</p> <p></p> <p>In the displayed window, type the following command and press Enter:</p> <pre><code>Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux\n</code></pre> <p>The following progress bar should appear:</p> <p></p> <p>At the end of the process, you will be asked if you want to restart your computer to complete the operation:</p> <p></p> <p>Make sure that the restart will not cause any disruptions and press Y to restart.</p> <p>During the reboot, you will see the following progress message:</p> <p></p> <p>Once the reboot is completed, start PowerShell again as described previously.</p> <p>Run the following command (replace Ubuntu.appx with the name of your .appx file which you memorized or wrote down previously):</p> <pre><code>Add-AppxPackage .\\Downloads\\Ubuntu\\Ubuntu.appx\n</code></pre> <p>During the process, you will see a status bar similar to this:</p> <p></p> <p>Once the process is finished, execute the following command (replace the C:\\Users\\Administrator\\Ubuntu path with the location of your Ubuntu folder):</p> <pre><code>$userenv = [System.Environment]::GetEnvironmentVariable(\"Path\", 
\"User\")\n[System.Environment]::SetEnvironmentVariable(\"PATH\", $userenv + \";C:\\Users\\Administrator\\Ubuntu\", \"User\")\n</code></pre> <p>Your newly installed Ubuntu should appear in your Start menu:</p> <p></p> <p>Run it. You will see the following message:</p> <p></p> <p>Wait until this process finishes. After that, you will get a prompt asking you for your desired username (which is to be used in the installed Ubuntu):</p> <p></p> <p>Type it and press Enter. You will now be asked to provide the password for that account:</p> <p></p> <p>Note</p> <p>Your password will not be visible as you type, not even as masking characters.</p> <p>Input your password and press Enter. You will then be asked to type it again:</p> <p></p> <p>If you typed the same password twice, it will be set as the password for that account. You wil get the following message as confirmation:</p> <p></p> <p>Wait for a short time. Eventually your Linux environment will be ready:</p> <p></p>"},{"location":"openstackcli/How-to-install-OpenStackClient-on-Windows-using-Windows-Subsystem-for-Linux-on-3Engines-Cloud-OpenStack-Hosting.html.html#step-3-install-openstack-cli-in-an-isolated-python-environment","title":"Step 3: Install OpenStack CLI in an isolated Python environment\ud83d\udd17","text":"<p>Now that you have installed Windows Subsystem on Linux running Ubuntu on your Windows computer, it is time to install OpenStack CLI.</p> <p>Update the software running on your Ubuntu:</p> <pre><code>sudo apt update &amp;&amp; sudo apt upgrade\n</code></pre> <p>Once the process is finished, install the python3-venv package to create a separate Python environment:</p> <pre><code>sudo apt install python3-venv\n</code></pre> <p>Create a virtual environment in which you will have OpenStack CLI installed:</p> <pre><code>python3 -m venv openstack_cli\n</code></pre> <p>Enter your new virtual environment:</p> <pre><code>source openstack_cli/bin/activate\n</code></pre> <p>Upgrade pip to the latest version:</p> 
<pre><code>pip install --upgrade pip\n</code></pre> <p>Install the python-openstackclient package:</p> <pre><code>pip install python-openstackclient\n</code></pre> <p>Verify that the OpenStack CLI works by viewing its help:</p> <pre><code>openstack --help\n</code></pre> <p>If the command shows its output using a pager, you should be able to use the arrow keys (or the vim keys j and k) to scroll and Q to exit.</p> <p>If everything seems to work, it is time to move to the next step: authentication to your user account on 3Engines Cloud.</p>"},{"location":"openstackcli/How-to-install-OpenStackClient-on-Windows-using-Windows-Subsystem-for-Linux-on-3Engines-Cloud-OpenStack-Hosting.html.html#step-4-download-your-openstack-rc-file","title":"Step 4: Download your OpenStack RC File\ud83d\udd17","text":"<p>Log in to your 3Engines Cloud hosting account with Horizon interface https://horizon.3Engines.com.</p> <p>Click on your username in the upper right corner. You will see the following menu:</p> <p></p> <p>If your account has two-factor authentication enabled, click the option OpenStack RC File (2FA). If, however, it does not have it enabled, use the OpenStack RC File option.</p> <p>The RC file will be downloaded. Memorize or write down the name of that file. Move this file to the root location of your C: drive.</p>"},{"location":"openstackcli/How-to-install-OpenStackClient-on-Windows-using-Windows-Subsystem-for-Linux-on-3Engines-Cloud-OpenStack-Hosting.html.html#step-5-move-the-rc-file-to-your-ubuntu-environment","title":"Step 5: Move the RC file to your Ubuntu environment\ud83d\udd17","text":"<p>Return to your Ubuntu window.</p> <p>You will now copy your RC file to your Ubuntu environment. 
Since Windows Subsystem for Linux mounts the C: drive under /mnt/c, the command for copying your RC file to your Ubuntu environment is as follows (replace main-openrc.sh with the name of your RC file):</p> <pre><code>cp /mnt/c/main-openrc.sh $HOME\n</code></pre> <p>If your account uses two-factor authentication, you will need jq to activate access to your cloud environment. To install jq, execute:</p> <pre><code>sudo apt install -y jq\n</code></pre> <p>Now use the source command on this file to begin the authentication process (replace main-openrc.sh with the name of your RC file):</p> <pre><code>source main-openrc.sh\n</code></pre> <p>You will see a prompt for the password to your 3Engines Cloud account. Type your password there and press Enter (your input is accepted even though no characters appear as you type).</p> <p>If your account has two-factor authentication enabled, you will also see a prompt for your six-digit code. Open the software you use for generating such codes (for example KeePassXC or FreeOTP) and find your code there, as usual. Make sure that you enter it before it expires. If you think that you will not manage to enter your current code in time, wait until a new one is generated.</p> <p>After entering your code, press Enter.</p> <p>Now you can test whether you have successfully authenticated by listing your VMs:</p> <pre><code>openstack server list\n</code></pre>"},{"location":"openstackcli/How-to-install-OpenStackClient-on-Windows-using-Windows-Subsystem-for-Linux-on-3Engines-Cloud-OpenStack-Hosting.html.html#how-to-run-this-environment-later","title":"How to run this environment later?\ud83d\udd17","text":"<p>If you close the window with Ubuntu and reopen it, you will see that you are no longer in the openstack_cli environment you created and thus no longer have access to OpenStack. 
You will need to reenter the openstack_cli environment and reauthenticate.</p> <p>After reopening the Ubuntu window, execute the source command on the file used for entering your openstack_cli environment, just like previously:</p> <pre><code>source openstack_cli/bin/activate\n</code></pre> <p>Now, reauthenticate by invoking the source command on your RC file (replace main-openrc.sh with the name of your RC file):</p> <pre><code>source main-openrc.sh\n</code></pre> <p>Type your password and press Enter. You should now be able to execute the OpenStack CLI commands as usual.</p>"},{"location":"openstackcli/How-to-install-OpenStackClient-on-Windows-using-Windows-Subsystem-for-Linux-on-3Engines-Cloud-OpenStack-Hosting.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>After installing the OpenStack CLI client and activating your new RC file, you can use other articles to perform operations on the 3Engines Cloud:</p> <p>How to create a set of VMs using OpenStack Heat Orchestration on 3Engines Cloud</p> <p>Generating and authorizing Terraform using Keycloak user on 3Engines Cloud</p> <p>How to upload your custom image using OpenStack CLI on 3Engines Cloud</p> <p>How to create a VM using the OpenStack CLI client on 3Engines Cloud cloud</p> <p>How To Use Command Line Interface for Kubernetes Clusters On 3Engines Cloud OpenStack Magnum</p>"},{"location":"openstackcli/How-to-move-data-volume-between-two-VMs-using-OpenStack-CLI-on-3Engines-Cloud.html.html","title":"How to move data volume between VMs using OpenStack CLI on 3Engines Cloud\ud83d\udd17","text":"<p>Volumes are used to store data, and that data can be accessed from a virtual machine to which the volume is attached. 
To access data stored on a volume from another virtual machine, you need to disconnect that volume from the virtual machine to which it is currently connected, and connect it to another instance.</p> <p>This article uses the OpenStack CLI client to transfer volumes between virtual machines that are in the same project.</p>"},{"location":"openstackcli/How-to-move-data-volume-between-two-VMs-using-OpenStack-CLI-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Hosting</p> <p>You need a 3Engines Cloud hosting account with Horizon interface https://horizon.3Engines.com.</p> <p>No. 2 OpenStack CLI client</p> <p>To be able to use the OpenStack CLI client, you need to have it installed. One of these articles should help:</p>"},{"location":"openstackcli/How-to-share-private-container-from-object-storage-to-another-user-on-3Engines-Cloud.html.html","title":"How to share private container from object storage to another user on 3Engines Cloud\ud83d\udd17","text":"<p>You can create your own private containers in the Object Store of your projects and grant access to them to other users.</p> <p>If you want to limit access to specific containers for chosen users, those users have to be members of other projects (one user or group of users per project is recommended).</p> <p>The project can be in one or more domains.</p> <p>Otherwise, if users are members of the same project, they see all containers in that project and you cannot limit access to specific containers.</p>"},{"location":"openstackcli/How-to-share-private-container-from-object-storage-to-another-user-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Hosting</p> <p>You need a 3Engines Cloud hosting account with Horizon interface https://horizon.3Engines.com.</p> <p>No. 
2 OpenStack client installed and connected to the cloud</p> <p>The following article will help you install Python and the OpenStack client called openstack, and will also help you connect to the cloud: How to install OpenStackClient for Linux on 3Engines Cloud.</p> <p>No. 3 Knowledge of downloading and working with RC files</p> <p>To be able to share private containers, you will have to manipulate RC files from the cloud. The following article will provide technical details:</p> <p>How to activate OpenStack CLI access to 3Engines Cloud cloud using one- or two-factor authentication</p> <p>No. 4 Using the OpenStack Swift module</p> <p>The OpenStack Object Store module, known as Swift, allows you to store and retrieve data with a simple API. It\u2019s built for scale and is optimized for durability, availability, and concurrency across the entire data set. Swift is ideal for storing unstructured data that can grow without bound.</p> <p>See How to access object storage using OpenStack CLI on 3Engines Cloud</p>"},{"location":"openstackcli/How-to-share-private-container-from-object-storage-to-another-user-on-3Engines-Cloud.html.html#setting-up-the-test-example","title":"Setting up the test example\ud83d\udd17","text":"<p>In the example below there are three projects:</p> <ol> <li>\u201cmain\u201d,</li> <li>\u201cproject_1\u201d,</li> <li>\u201cproject_2\u201d.</li> </ol> <p></p> <p>\u2026 and three users:</p> <p>All clouds</p> <ol> <li>\u201cowner\u201d - the user with member role in project \u201cmain\u201d,</li> <li>\u201cuser_1\u201d - the user with member role in project \u201cproject_1\u201d,</li> <li>\u201cuser_2\u201d - the user with member role in project \u201cproject_2\u201d.</li> </ol> <p></p> <p>The user \u201cowner\u201d has three containers in their project \u201cmain\u201d\u2026</p> <ol> <li>c-main-a,</li> <li>c-main-b,</li> <li>c-main-d.</li> </ol> <p></p> <p>\u2026and the following files in the containers:</p> <ul> <li> <p>c-main-a</p> </li> <li> 
<p>test-main-a1.txt</p> </li> <li>test-main-a2.txt</li> <li> <p>c-main-b</p> </li> <li> <p>test-main-b.txt</p> </li> <li> <p>c-main-d</p> </li> <li> <p>test-main-d.txt</p> </li> </ul> <p>In the example below, the user \u201cowner\u201d will grant \u201cread only\u201d access to container \u201cc-main-a\u201d for \u201cuser_1\u201d.</p>"},{"location":"openstackcli/How-to-share-private-container-from-object-storage-to-another-user-on-3Engines-Cloud.html.html#download-the-rc-file-to-share-permissions-with-users","title":"Download the RC file to share permissions with users\ud83d\udd17","text":"<p>Firstly, the user \u201cowner\u201d should log in to their domain if they have not done so yet:</p> <p></p> <p>Then, they should choose the main project:</p> <p></p> <p>After that, they should download the \u201cOpenStack RC File\u201d for the user \u201cowner\u201d and the project \u201cmain\u201d:</p> <p></p> <p>Note</p> <p>We shall assume the simplest case in which all three users have access to the cloud with one-factor authentication. If two-factor authentication is enabled, then the owner will have to share the six-digit code that is needed for the second factor of authentication.</p> <p>You can preview the content of that file in your Linux terminal:</p> <pre><code>$ cat main-openrc.sh\n</code></pre> <p>main-openrc.sh</p> <pre><code>#!/usr/bin/env bash\n# To use an OpenStack cloud you need to authenticate against the Identity\n# service named keystone, which returns a **Token** and **Service Catalog**.\n# The catalog contains the endpoints for all services the user/tenant has\n# access to - such as Compute, Image Service, Identity, Object Storage, Block\n# Storage, and Networking (code-named nova, glance, keystone, swift,\n# cinder, and neutron).\n#\n# *NOTE*: Using the 3 *Identity API* does not necessarily mean any other\n# OpenStack API is version 3. For example, your cloud provider may implement\n# Image API v1.1, Block Storage API v2, and Compute API v2.0. 
OS_AUTH_URL is\n# only for the Identity API served through keystone.\nunset OS_TENANT_ID\nunset OS_TENANT_NAME\nexport OS_AUTH_URL=https://keystone.3Engines.com:5000/v3\nexport OS_INTERFACE=public\nexport OS_IDENTITY_API_VERSION=3\nexport OS_USERNAME=\"owner\"\nexport OS_REGION_NAME=\"WAW3-1\"\nexport OS_PROJECT_ID=ab0c8e1710854b92b0be2b40b31a615a\nexport OS_PROJECT_NAME=\"main_project\"\nexport OS_PROJECT_DOMAIN_ID=\"119f4676f307434eaf28daab5ba3cc92\"\nif [ -z \"$OS_REGION_NAME\" ]; then unset OS_REGION_NAME; fi\nif [ -z \"$OS_USER_DOMAIN_NAME\" ]; then unset OS_USER_DOMAIN_NAME; fi\nif [ -z \"$OS_PROJECT_DOMAIN_ID\" ]; then unset OS_PROJECT_DOMAIN_ID; fi\necho \"Please enter your OpenStack Password for project $OS_PROJECT_NAME as user $OS_USERNAME: \"\nread -sr OS_PASSWORD_INPUT\nexport OS_PASSWORD=$OS_PASSWORD_INPUT\nexport OS_AUTH_TYPE=password\nexport OS_USER_DOMAIN_NAME=\"cloud_00373\" # ****IF THIS LINE IS MISSING IN YOUR FILE PLEASE ADD IT!!!****\n</code></pre>"},{"location":"openstackcli/How-to-share-private-container-from-object-storage-to-another-user-on-3Engines-Cloud.html.html#sharing-the-rc-file-with-the-users","title":"Sharing the RC file with the users\ud83d\udd17","text":"<p>Copy the file main-openrc.sh to your CLI directory.</p> <p>The user called \u201cuser_1\u201d should do the same procedure:</p> <ol> <li>login to their \u201cproject_1\u201d</li> <li>download the \u201cOpenStack RC File\u201d for user \u201cuser_1\u201d and project \u201cproject_1\u201d</li> </ol> <p>project_1-openrc.sh</p> <pre><code>#!/usr/bin/env bash\n# To use an OpenStack cloud you need to authenticate against the Identity\n# service named keystone, which returns a **Token** and **Service Catalog**.\n# The catalog contains the endpoints for all services the user/tenant has\n# access to - such as Compute, Image Service, Identity, Object Storage, Block\n# Storage, and Networking (code-named nova, glance, keystone, swift,\n# cinder, and neutron).\n#\n# *NOTE*: Using the 3 
*Identity API* does not necessarily mean any other\n# OpenStack API is version 3. For example, your cloud provider may implement\n# Image API v1.1, Block Storage API v2, and Compute API v2.0. OS_AUTH_URL is\n# only for the Identity API served through keystone.\nunset OS_TENANT_ID\nunset OS_TENANT_NAME\nexport OS_AUTH_URL=https://keystone.3Engines.com:5000/v3\nexport OS_INTERFACE=public\nexport OS_IDENTITY_API_VERSION=3\nexport OS_USERNAME=\"user_1\"\nexport OS_REGION_NAME=\"WAW3-1\"\nexport OS_PROJECT_ID=4d488c376c0b4bc79a60b56bc72834e8\nexport OS_PROJECT_NAME=\"p_project_1\"\nexport OS_PROJECT_DOMAIN_ID=\"119f4676f307434eaf28daab5ba3cc92\"\nif [ -z \"$OS_REGION_NAME\" ]; then unset OS_REGION_NAME; fi\nif [ -z \"$OS_USER_DOMAIN_NAME\" ]; then unset OS_USER_DOMAIN_NAME; fi\nif [ -z \"$OS_PROJECT_DOMAIN_ID\" ]; then unset OS_PROJECT_DOMAIN_ID; fi\necho \"Please enter your OpenStack Password for project $OS_PROJECT_NAME as user $OS_USERNAME: \"\nread -sr OS_PASSWORD_INPUT\nexport OS_PASSWORD=$OS_PASSWORD_INPUT\nexport OS_AUTH_TYPE=password\nexport OS_USER_DOMAIN_NAME=\"cloud_00373\" # ****IF THIS LINE IS MISSING IN YOUR FILE PLEASE ADD IT!!!****\n</code></pre> <p>The user called \u201cuser_2\u201d should do the same procedure as above.</p>"},{"location":"openstackcli/How-to-share-private-container-from-object-storage-to-another-user-on-3Engines-Cloud.html.html#owner-sources-the-rc-file","title":"Owner sources the RC file\ud83d\udd17","text":"<p>Now, each user should open their terminal and source the openrc file:</p> <p>terminal of user \u201cowner\u201d</p> <pre><code>$ source main-openrc.sh\nPlease enter your OpenStack Password for project main as user owner: &lt;here enter the password for owner&gt;\n\n(owner) $ swift list\nc-main-a\nc-main-b\nc-main-d\n</code></pre>"},{"location":"openstackcli/How-to-share-private-container-from-object-storage-to-another-user-on-3Engines-Cloud.html.html#user_1-sources-the-rc-file","title":"User_1 sources the RC 
file\ud83d\udd17","text":"<p>terminal of user \u201cuser_1\u201d:</p> <pre><code>$ source project_1-openrc.sh\nPlease enter your OpenStack Password for project project_1 as user user_1:\n &lt;here enter the password for user_1&gt;\n\n(user_1) $ swift list\nc-project_1-a\nc-project_1-b\n</code></pre>"},{"location":"openstackcli/How-to-share-private-container-from-object-storage-to-another-user-on-3Engines-Cloud.html.html#user_2-sources-the-rc-file","title":"User_2 sources the RC file\ud83d\udd17","text":"<p>terminal of user \u201cuser_2\u201d:</p> <pre><code>$ source project_2-openrc.sh\nPlease enter your OpenStack Password for project project_2 as user user_2: &lt;here enter the password for user_2&gt;\n\n(user_2) $ swift list\nc-project_2-a\nc-project_2-b\n</code></pre>"},{"location":"openstackcli/How-to-share-private-container-from-object-storage-to-another-user-on-3Engines-Cloud.html.html#uploading-of-test-files","title":"Uploading of test files\ud83d\udd17","text":"<p>The user \u201cowner\u201d prepares and uploads test files:</p> <pre><code>(owner) $ touch test-main-a1.txt\n(owner) $ touch test-main-a2.txt\n(owner) $ swift upload c-main-a test-main-a1.txt\ntest-main-a1.txt\n(owner) $ swift upload c-main-a test-main-a2.txt\ntest-main-a2.txt\n</code></pre> <p></p> <pre><code>(owner) $ touch test-main-b.txt\n(owner) $ touch test-main-d.txt\n(owner) $ swift upload c-main-b test-main-b.txt\ntest-main-b.txt\n</code></pre> <p></p> <pre><code>(owner) $ swift upload c-main-d test-main-d.txt\ntest-main-d.txt\n</code></pre> <p></p> <p>Check the id of user_1:</p> <pre><code>(user_1) $ openstack user show --format json \"${OS_USERNAME}\" | jq -r .id\n3de5f40b4e6d433792ac387896729ec8\n</code></pre> <p>Check the id of user_2:</p> <pre><code>(user_2) $ openstack user show --format json \"${OS_USERNAME}\" | jq -r .id\nfb4ec0de674d4c5ba608ee75cc6da918\n</code></pre> <p>You can check the status of the container \u201cc-main-a\u201d.</p> <p>\u201cRead ACL\u201d and \u201cWrite 
ACL\u201d shown in the output below already contain the values they receive once access has been granted; before granting, both fields are empty</p> <pre><code>(owner) $ swift stat c-main-a\n Account: v1\n Container: c-main-a\n Objects: 2\n Bytes: 29\n Read ACL: *:3de5f40b4e6d433792ac387896729ec8\n Write ACL: *:3de5f40b4e6d433792ac387896729ec8\n Sync To:\n Sync Key:\n X-Timestamp: 1655199342.39064\nX-Container-Bytes-Used-Actual: 8192\n X-Storage-Policy: default-placement\n X-Storage-Class: STANDARD\n Last-Modified: Tue, 14 Jun 2022 13:41:32 GMT\n X-Trans-Id: tx000000000000003964e44-0062b17ebb-17404e6b-default\n X-Openstack-Request-Id: tx000000000000003964e44-0062b17ebb-17404e6b-default\n Accept-Ranges: bytes\n Content-Type: text/plain; charset=utf-8\n</code></pre>"},{"location":"openstackcli/How-to-share-private-container-from-object-storage-to-another-user-on-3Engines-Cloud.html.html#granting-access","title":"Granting access\ud83d\udd17","text":"<p>Grant access to container \u201cc-main-a\u201d for user_1:</p> <pre><code>(owner) $ swift post --read-acl \"*:3de5f40b4e6d433792ac387896729ec8\" c-main-a\n</code></pre> <p>Get the credentials to access Object Store in \u201cmain\u201d:</p> <pre><code>(owner) $ swift auth | awk -F = '/OS_STORAGE_URL/ {print $2}'\nhttps://s3.waw3-1.3Engines.com/swift/v1\n</code></pre> <p>Pass the link:</p> <pre><code>https://s3.waw3-1.3Engines.com/swift/v1\n</code></pre> <p>to \u201cuser_1\u201d.</p> <p>\u201cuser_1\u201d should create an environment variable \u201cSURL\u201d:</p> <pre><code>(user_1) $ SURL=https://s3.waw3-1.3Engines.com/swift/v1\n</code></pre> <p>Now the user \u201cuser_1\u201d has access to the \u201cc-main-a\u201d container in the \u201cmain\u201d project:</p> <pre><code>(user_1) $ swift --os-storage-url=\"${SURL}\" list c-main-a\ntest-main-a1.txt\ntest-main-a2.txt\n</code></pre> <p>But the user \u201cuser_1\u201d has no access to other containers in the \u201cmain\u201d project:</p> <pre><code>(user_1) $ swift --os-storage-url=\"${SURL}\" list c-main-b\nContainer GET failed: https://s3.waw3-1.3Engines.com/swift/v1/c-main-b?format=json 403 
Forbidden [first 60\nchars of response] b'{\"Code\":\"AccessDenied\",\"BucketName\":\"c-main-b\",\"RequestId\":\"'\nFailed Transaction ID: tx00000000000000397edda-0062b186ef-17379d9b-default\n</code></pre> <p>A similar procedure can be used to grant \u201cwrite\u201d permission to \u201cuser_1\u201d:</p> <pre><code>(owner) $ swift post --write-acl \"*:3de5f40b4e6d433792ac387896729ec8\" c-main-a\n</code></pre>"},{"location":"openstackcli/How-to-share-private-container-from-object-storage-to-another-user-on-3Engines-Cloud.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>These articles can also be of interest:</p> <p>How to use Object Storage on 3Engines Cloud.</p> <p>Bucket sharing using s3 bucket policy on 3Engines Cloud</p>"},{"location":"openstackcli/How-to-start-a-VM-from-instance-snapshot-using-OpenStack-CLI-on-3Engines-Cloud.html.html","title":"How to start a VM from instance snapshot using OpenStack CLI on 3Engines Cloud\ud83d\udd17","text":"<p>In this article, you will learn how to create a virtual machine from an instance snapshot using the OpenStack CLI client.</p>"},{"location":"openstackcli/How-to-start-a-VM-from-instance-snapshot-using-OpenStack-CLI-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p> <p>No. 2 OpenStack CLI client</p> <p>You need to have the OpenStack CLI client installed. One of the following articles should help you:</p>"},{"location":"openstackcli/How-to-transfer-volumes-between-domains-and-projects-using-OpenStack-CLI-client-on-3Engines-Cloud.html.html","title":"How to transfer volumes between domains and projects using OpenStack CLI client on 3Engines Cloud\ud83d\udd17","text":"<p>Volumes in OpenStack can be used to store data. 
They are visible to virtual machines like drives.</p> <p>Such a volume is usually available to just the project in which it was created. Transferring data stored on it between projects might take a long time, especially if such a volume contains lots of data, like, say, hundreds or thousands of gigabytes (or even more).</p> <p>This article covers changing the assignment of a volume to a project. This allows you to move a volume directly from one project (which we will call source project) to another (which we will call destination project) using the OpenStack CLI in a way that does not require you to physically transfer the data.</p> <p>The source project and destination project must both be on the same cloud (for example WAW3-2). They can (but don\u2019t have to) belong to different users from different domains and organizations.</p>"},{"location":"openstackcli/How-to-transfer-volumes-between-domains-and-projects-using-OpenStack-CLI-client-on-3Engines-Cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Initializing transfer of volume</li> <li>Accepting transfer of volume</li> <li>Cancelling transfer of volume</li> </ul>"},{"location":"openstackcli/How-to-transfer-volumes-between-domains-and-projects-using-OpenStack-CLI-client-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com</p> <p>No. 2 OpenStack CLI Client</p> <p>To use the OpenStack CLI client, you need to have it installed. 
See one of these articles to learn how to do it:</p>"},{"location":"openstackcli/Resizing-a-virtual-machine-using-OpenStack-CLI-on-3Engines-Cloud.html.html","title":"Resizing a virtual machine using OpenStack CLI on 3Engines Cloud\ud83d\udd17","text":""},{"location":"openstackcli/Resizing-a-virtual-machine-using-OpenStack-CLI-on-3Engines-Cloud.html.html#introduction","title":"Introduction\ud83d\udd17","text":"<p>When creating a new virtual machine under OpenStack, one of the options you choose is the flavor. A flavor is a predefined combination of CPU, memory and disk size, and there is usually a number of such flavors for you to choose from.</p> <p>After the instance is spawned, it is possible to change one flavor for another, and that process is called resizing. You might want to resize an already existing VM in order to:</p> <ul> <li>increase (or decrease) the number of CPUs used,</li> <li>use more RAM to prevent crashes or enable swapping,</li> <li>add larger storage to avoid running out of disk space,</li> <li>seamlessly transition from a testing to a production environment,</li> <li>adapt to a changed application workload by scaling the VM up or down.</li> </ul> <p>In this article, we are going to resize VMs using CLI commands in OpenStack.</p>"},{"location":"openstackcli/Resizing-a-virtual-machine-using-OpenStack-CLI-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://portal.3Engines.com/.</p> <p>If you are a normal user of 3Engines Cloud hosting, you will have all privileges needed to resize the VM. Make sure that the VM you are about to resize belongs to a project you have access to.</p> <p>How to create a VM using the OpenStack CLI client on 3Engines Cloud cloud</p> <p>No. 
2 Awareness of existing quotas and flavors limits</p> <p>For general introduction to quotas and flavors, see Dashboard Overview \u2013 Project Quotas And Flavors Limits on 3Engines Cloud.</p> <p>Also:</p> <ul> <li>The VM you want to resize is in an active or shut down state.</li> <li>A flavor with the desired resource configuration exists.</li> <li>Adequate resources are available in your OpenStack environment to accommodate the resize.</li> </ul>"},{"location":"openstackcli/Resizing-a-virtual-machine-using-OpenStack-CLI-on-3Engines-Cloud.html.html#creating-a-new-vm","title":"Creating a new VM\ud83d\udd17","text":"<p>To illustrate the commands in this article, let us create a new VM in order to start with a clean slate. (It goes without saying that you can practice with any of the already existing VMs in your account.)</p> <p>To see all flavors:</p> <pre><code>openstack flavor list\n</code></pre> <p></p> <p>This is the command to create a new VM called ResizingCLI:</p> <pre><code>openstack server create \\\n--image \"Ubuntu 22.04 LTS\" \\\n--flavor eo2a.large \\\n--key-name sshkey \\\n--network cloud_00341_3 \\\n--security-group default \\\n--security-group allow_ping_ssh_icmp_rdp \\\nResizingCLI\n</code></pre> <p>This is the result:</p> <p></p> <p>The id for ResizingCLI is 82bba971-8ff1-4f85-93d6-9d56bb7b185d and we can use it in various commands to denote this particular VM.</p> <p>To see all currently available VMs, use command</p> <pre><code>openstack server list\n</code></pre>"},{"location":"openstackcli/Resizing-a-virtual-machine-using-OpenStack-CLI-on-3Engines-Cloud.html.html#steps-to-resize-the-vm","title":"Steps to Resize the VM\ud83d\udd17","text":"<p>To resize a VM with CLI, there is a general command</p> <pre><code>openstack server resize --flavor &lt;new_flavor&gt; &lt;vm_name_or_id&gt;\n</code></pre> <p>We need flavor ID or name as well as VM\u2019s name or id.</p> <p>In this example we want to scale up the existing VM ResizingCLI, using eo2.xlarge 
flavor. The command will be:</p> <pre><code>openstack server resize --flavor eo2.xlarge ResizingCLI\n</code></pre> <p>To verify the resize, check the status of the VM:</p> <pre><code>openstack server show ResizingCLI\n</code></pre> <p></p> <p>When the VM has VERIFY_RESIZE status, we are able to confirm the resize. The command is:</p> <pre><code>openstack server resize confirm ResizingCLI\n</code></pre> <p>Execute once again:</p> <pre><code>openstack server show ResizingCLI\n</code></pre> <p>to see the real state of the VM after confirmation. We will now see that the status is ACTIVE.</p>"},{"location":"openstackcli/Resizing-a-virtual-machine-using-OpenStack-CLI-on-3Engines-Cloud.html.html#reverting-a-resize","title":"Reverting a resize\ud83d\udd17","text":"<p>Reverting a resize switches the VM back to its original flavor and cleans up temporary resources allocated during the resize operation.</p> <p>It is only possible to revert a resize if the status is VERIFY_RESIZE. The command would be:</p> <pre><code>openstack server resize revert ResizingCLI\n</code></pre> <p>If the status is not VERIFY_RESIZE, we will get a message stating that it is not possible to revert a resize while the VM is in an active state (HTTP 409). In that case, perform the \u201cregular\u201d resizing with openstack server resize.</p>"},{"location":"openstackcli/Resizing-a-virtual-machine-using-OpenStack-CLI-on-3Engines-Cloud.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>You can also resize the virtual machine using OpenStack Horizon. 
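The status flow described in the two sections above — resize, then confirm or revert only while the VM is in VERIFY_RESIZE, with an HTTP 409-style error otherwise — can be summarized as a small state table. The following Python sketch is our illustration of that flow, not output of the OpenStack API:

```python
# Illustrative sketch of the resize status flow described in this article.
# Status names (ACTIVE, SHUTOFF, VERIFY_RESIZE) follow the article;
# the transition table itself is our summary, not the Nova API.
TRANSITIONS = {
    ("ACTIVE", "resize"): "VERIFY_RESIZE",
    ("SHUTOFF", "resize"): "VERIFY_RESIZE",
    ("VERIFY_RESIZE", "confirm"): "ACTIVE",
    ("VERIFY_RESIZE", "revert"): "ACTIVE",
}

def next_status(status: str, action: str) -> str:
    """Return the resulting VM status, or raise ValueError for an invalid
    combination (mirroring the HTTP 409 the CLI reports when, for
    example, you try to revert while the VM is ACTIVE)."""
    try:
        return TRANSITIONS[(status, action)]
    except KeyError:
        raise ValueError(f"cannot {action} while status is {status}")

print(next_status("ACTIVE", "resize"))          # VERIFY_RESIZE
print(next_status("VERIFY_RESIZE", "confirm"))  # ACTIVE
```

The key point the table captures is that confirm and revert are only reachable from VERIFY_RESIZE; every other combination fails.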
More details here: /openstackcli/Resizing-a-virtual-machine-using-OpenStack-Horizon-on-3Engines-Cloud</p>"},{"location":"openstackcli/Use-backup-command-to-create-rotating-backups-of-virtual-machines-on-3Engines-Cloud.html.html","title":"Use backup command to create rotating backups of virtual machines on 3Engines Cloud cloud\ud83d\udd17","text":"<p>Rotating backups in OpenStack refer to a backup strategy where older backups are automatically deleted after a predefined number of backups are created. This ensures that storage does not grow indefinitely while still maintaining a set number of recent backups for disaster recovery.</p>"},{"location":"openstackcli/Use-backup-command-to-create-rotating-backups-of-virtual-machines-on-3Engines-Cloud.html.html#the-rotating-backup-algorithm","title":"The rotating backup algorithm\ud83d\udd17","text":"<p>Creating rotating backups of virtual machines is a process comprising the following steps:</p> Define the period of backups Usually daily, weekly, monthly or any other time period. Define the rotation limit How many backups to retain (we will refer to this number as maxN throughout this article). Delete older backups Once the limit is reached, start deleting existing backups, beginning with the oldest one."},{"location":"openstackcli/Use-backup-command-to-create-rotating-backups-of-virtual-machines-on-3Engines-Cloud.html.html#backup-create-vs-image-create","title":"backup create vs. 
image create\ud83d\udd17","text":"<p>There are two ways of creating backups under OpenStack, using one of these two commands:</p> <p>openstack server backup create and openstack server image create</p> <p>Here is how they compare:</p> <p>Table 3 Comparison of Backup and Image Creation Commands\ud83d\udd17</p> Feature <code>openstack server backup create</code> <code>openstack server image create</code> Association with VM Associated using backup image property Associated using backup name Rotation support Rotation with <code>--backup-type</code> and incremental backups No built-in rotation support Classification in Horizon Marked as image Marked as snapshot Horizon Select Boot Source Choose Instance Snapshot Choose Image Purpose Primarily used for backups, can be rotated and managed Creates a single VM snapshot without rotation Incremental backup support Yes, supports incremental backups No, always creates a full snapshot Multiple rotating schedules No, only one Yes (daily, weekly, monthly etc.) Best usage scenario Automated backup strategies with rotation Capturing the current state of a VM for cloning or rollback Can be scripted? Yes Yes <p>In this article, we are going to use the openstack server backup create command under OpenStack to create rotating backups of virtual machines.</p>"},{"location":"openstackcli/Use-backup-command-to-create-rotating-backups-of-virtual-machines-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com</p> <p>No. 2 VM which will be backed up</p> <p>You need a virtual machine which will be backed up. 
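The maxN rotation limit described in the algorithm above can be sketched as a pure Python helper that decides which backups a rotation pass should delete. The backup names, the dates, and the helper itself are our illustration, not part of the OpenStack CLI:

```python
# Illustrative sketch of the maxN rotation limit described above:
# keep the newest max_n backups and return the names of the older ones
# that a rotation pass would delete. All names and dates are hypothetical.
def backups_to_delete(backups, max_n):
    """backups: list of (name, created_at) pairs with sortable dates."""
    ordered = sorted(backups, key=lambda b: b[1], reverse=True)  # newest first
    return [name for name, _ in ordered[max_n:]]

weekly = [
    ("backup-w1", "2025-01-05"),
    ("backup-w2", "2025-01-12"),
    ("backup-w3", "2025-01-19"),
    ("backup-w4", "2025-01-26"),
]
print(backups_to_delete(weekly, 3))  # ['backup-w1']
```

With a limit of three, only the oldest of the four backups falls outside the retention window; the actual deletion would then be performed with the corresponding OpenStack commands.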
If you don\u2019t have one, you can create it by following one of these articles:</p>"},{"location":"openstackcli/Use-script-to-create-daily-weekly-and-monthly-rotating-backups-of-virtual-machines-using-on-3Engines-Cloud.html.html","title":"Use script to create daily weekly and monthly rotating backups of virtual machines on 3Engines Cloud\ud83d\udd17","text":"<p>Rotating backups in OpenStack refer to a backup strategy where older backups are automatically deleted after a predefined number of backups are created. This ensures that storage does not grow indefinitely while still maintaining a set number of recent backups for disaster recovery.</p>"},{"location":"openstackcli/Use-script-to-create-daily-weekly-and-monthly-rotating-backups-of-virtual-machines-using-on-3Engines-Cloud.html.html#backup-create-vs-image-create","title":"backup create vs. image create\ud83d\udd17","text":"<p>There are two ways of creating backups under OpenStack, using one of these two commands:</p> <p>openstack server backup create and openstack server image create</p> <p>Here is how they compare:</p> <p>Table 4 Comparison of Backup and Image Creation Commands\ud83d\udd17</p> Feature <code>openstack server backup create</code> <code>openstack server image create</code> Association with VM Associated using backup image property Associated using backup name Rotation support Rotation with <code>--backup-type</code> and incremental backups No built-in rotation support Classification in Horizon Marked as image Marked as snapshot Horizon Select Boot Source Choose Instance Snapshot Choose Image Purpose Primarily used for backups, can be rotated and managed Creates a single VM snapshot without rotation Multiple rotating schedules No, only one Yes (daily, weekly, monthly etc.) 
Incremental backup support Yes, supports incremental backups No, always creates a full snapshot Best usage scenario Automated backup strategies with rotation Capturing the current state of a VM for cloning or rollback Can be scripted? Yes Yes <p>In this article, you will learn how to create multiple series of rotating backups with a script which uses multiple OpenStackClient commands to achieve this goal.</p>"},{"location":"openstackcli/Use-script-to-create-daily-weekly-and-monthly-rotating-backups-of-virtual-machines-using-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com</p> <p>No. 2 VM which will be backed up</p> <p>You need a virtual machine which will be backed up. If you don\u2019t have one, you can create it by following one of these articles:</p>"},{"location":"openstackcli/openstackcli.html.html","title":"OpenStack CLI","text":""},{"location":"openstackcli/openstackcli.html.html#available-documentation","title":"Available Documentation","text":"<ul> <li>How to Backup an Instance and Download it to the Desktop on 3Engines Cloud OpenStack Hosting</li> <li>How to create a set of VMs using OpenStack Heat Orchestration on 3Engines Cloud</li> <li>How to Create and Configure New Openstack Project Through Horizon on 3Engines Cloud Cloud</li> <li>How to install OpenStackClient for Linux on 3Engines Cloud</li> <li>How to install OpenStackClient GitBash for Windows on 3Engines Cloud</li> <li>How to share private container from object storage to another user on 3Engines Cloud</li> <li>How to install OpenStackClient on Windows using Windows Subsystem for Linux on 3Engines Cloud OpenStack Hosting</li> <li>How to move data volume between VMs using OpenStack CLI on 3Engines Cloud</li> <li>How to access object storage using OpenStack CLI on 3Engines Cloud</li> <li>How to transfer volumes between domains 
and projects using OpenStack CLI client on 3Engines Cloud</li> <li>How to start a VM from instance snapshot using OpenStack CLI on 3Engines Cloud</li> <li>How to create instance snapshot using OpenStack CLI on 3Engines Cloud</li> <li>Resizing a virtual machine using OpenStack CLI on 3Engines Cloud</li> <li>Use backup command to create rotating backups of virtual machines on 3Engines Cloud cloud</li> <li>Use script to create daily weekly and monthly rotating backups of virtual machines on 3Engines Cloud</li> </ul>"},{"location":"openstackdev/Authenticating-to-OpenstackSDK-using-Keycloak-Credentials-on-3Engines-Cloud.html.html","title":"Authenticating with OpenstackSDK using Keycloak Credentials on 3Engines Cloud\ud83d\udd17","text":"<p>If you are using OpenStackSDK to write your own script for OpenStack, the code in this tutorial will enable the user to automatically log into your app. When the user normally tries to log into the 3Engines Cloud account using https://portal.3Engines.com/, they have to log in manually. A screen like this appears:</p> <p></p> <p>If they already have an account, they will be logged in after clicking the Login button. 
The code in this article spares the user this procedure: if they have ever been authenticated to OpenStack, they will be able to log in with your code without even seeing the login screen.</p>"},{"location":"openstackdev/Authenticating-to-OpenstackSDK-using-Keycloak-Credentials-on-3Engines-Cloud.html.html#what-are-we-going-to-do","title":"What Are We Going To Do\ud83d\udd17","text":"<ul> <li>Set up Python, pip and Venv environments,</li> <li>Download RC file from Horizon,</li> <li>Source that file (execute it and supply the password to authenticate yourself to the system),</li> <li>Prepare Python code to authenticate to Keycloak by using the values from the RC file.</li> </ul>"},{"location":"openstackdev/Authenticating-to-OpenstackSDK-using-Keycloak-Credentials-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Install Python and its environment</p> <p>The following article will help you install Python and pip, as well as Venv: How to install Python virtualenv or virtualenvwrapper on 3Engines Cloud.</p> <p>No. 2 RC File</p> <p>The RC file is available from the OpenStack Horizon module and serves as a source of authentication for the user. For technical details on how to get and activate it, see How To Install OpenStack and Magnum Clients for Command Line Interface to 3Engines Cloud Horizon.</p>"},{"location":"openstackdev/Authenticating-to-OpenstackSDK-using-Keycloak-Credentials-on-3Engines-Cloud.html.html#step-1-source-your-rc-file","title":"Step 1 Source Your RC File\ud83d\udd17","text":"<p>Using Prerequisite No. 2, download the corresponding RC file. That file can be executed using the source command in Linux/UNIX environments. 
Once executed, it will ask you for the password and will authenticate you with it.</p> <p>Here are the system variables (their names all start with OS_) that the source command will set up as well:</p> <pre><code>export OS_AUTH_URL=https://keystone.3Engines.com:5000/v3\nexport OS_INTERFACE=public\nexport OS_IDENTITY_API_VERSION=3\nexport OS_USERNAME=\"Your E-mail Address\"\nexport OS_REGION_NAME=\"WAW3-1\"\nexport OS_PROJECT_ID=\"Your Project ID\"\nexport OS_PROJECT_NAME=\"Your Project Name\"\nexport OS_PROJECT_DOMAIN_ID=\"Your Domain ID\"\n\nexport OS_AUTH_TYPE=v3oidcpassword\nexport OS_PROTOCOL=openid\nexport OS_DISCOVERY_ENDPOINT=https://identity.3Engines.com/auth/realms/Creodias-new/.well-known/openid-configuration\nexport OS_IDENTITY_PROVIDER=ident_creodias-new_provider\nexport OS_CLIENT_ID=openstack\nexport OS_CLIENT_SECRET=50xx4972-546x-46x9-8x72-x91x401x8x30\n</code></pre>"},{"location":"openstackdev/Authenticating-to-OpenstackSDK-using-Keycloak-Credentials-on-3Engines-Cloud.html.html#step-2-create-python-code-that-will-perform-keycloak-authentication-within-your-app","title":"Step 2 Create Python Code that Will Perform Keycloak Authentication Within Your App\ud83d\udd17","text":"<p>In this step you will copy the values from the RC file to your Python code. 
For instance, variable</p> <pre><code>OS_DISCOVERY_ENDPOINT=https://identity.3Engines.com/auth/realms/Creodias-new/.well-known/openid-configuration\n</code></pre> <p>from the RC file will become the value of the eponymous variable in your code:</p> <pre><code>auth['discovery_endpoint'] = \"https://identity.3Engines.com/auth/realms/Creodias-new/.well-known/openid-configuration\"\n</code></pre> <p>Here is what your code should look like in the end:</p> <pre><code>from openstack import connection\nimport os\nfrom openstack import enable_logging\n\nauth = {}\nauth['auth_url'] = \"https://keystone.3Engines.com:5000/v3\"\nauth['username'] = \"Your E-mail Address\"\nauth['password'] = os.getenv('OS_PASSWORD')\nauth['project_domain_id'] = \"Your Domain ID\"\nauth['project_name'] = \"Your Project Name\"\nauth['project_id'] = \"Your Project ID\"\nauth['discovery_endpoint'] = \"https://identity.3Engines.com/auth/realms/Creodias-new/.well-known/openid-configuration\"\nauth['client_id'] = \"openstack\"\nauth['identity_provider'] = 'ident_creodias-new_provider'\nauth['client_secret'] = os.getenv('OS_CLIENT_SECRET')\nauth['protocol'] = 'openid'\n\n# Optional: log SDK activity while testing\nenable_logging(debug=False)\n\n# Create the connection; the auth type matches OS_AUTH_TYPE from the RC file\nconn = connection.Connection(auth_type='v3oidcpassword', auth=auth)\n</code></pre>"},{"location":"openstackdev/Authenticating-to-OpenstackSDK-using-Keycloak-Credentials-on-3Engines-Cloud.html.html#step-3-use-the-code-in-your-app","title":"Step 3 Use the Code in Your App\ud83d\udd17","text":"<p>Once generated, this code will authenticate the user and they will not have to supply their credentials each time they try to use your app.</p>"},{"location":"openstackdev/Generating-and-authorizing-Terraform-using-Keycloak-user-on-3Engines-Cloud.html.html","title":"Generating and authorizing Terraform using Keycloak user on 3Engines Cloud\ud83d\udd17","text":"<p>Clicking in Horizon and entering CLI commands are the two main ways of using an OpenStack system. They are well suited to interactively executing one command at a time but do not scale up easily. 
A tool such as Terraform, by HashiCorp, provides an alternative to manual ways of introducing cascading changes. Here is how you could, say, create several instances at once:</p> <ul> <li>Define parameters for the creation of one instance,</li> <li>save them in a Terraform configuration file and</li> <li>let Terraform automatically repeat it the prescribed number of times.</li> </ul> <p>The plan is to install Terraform, get an OpenStack token, enter it into the configuration file and execute. You will then be able to use Terraform effectively within 3Engines Cloud. For instance, with Terraform you can</p> <ul> <li>automate creation of a multitude of virtual machines, each with their own floating IPs, DNS and network functions or</li> <li>automate creation of Kubernetes clusters</li> </ul> <p>and so on.</p>"},{"location":"openstackdev/Generating-and-authorizing-Terraform-using-Keycloak-user-on-3Engines-Cloud.html.html#what-we-are-going-to-do","title":"What We Are Going To Do\ud83d\udd17","text":"<ul> <li>Install Terraform as a root user</li> <li>Reconnect to the cloud</li> <li>Download OpenStack token</li> <li>Set up the configuration file and initialize Terraform</li> <li>Create Terraform code</li> <li>Explain the meaning of the variables used</li> <li>Execute the Terraform script</li> </ul>"},{"location":"openstackdev/Generating-and-authorizing-Terraform-using-Keycloak-user-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com. In particular, you will need the password for the account so have it ready in advance.</p> <p>No. 2 Installed version of Linux</p> <p>You can use your current Linux installation; however, in this article we shall start with a clean slate. 
Create a new VM with Ubuntu as defined in this article:</p> <p>How to create a Linux VM and access it from Linux command line on 3Engines Cloud.</p> <p>No. 3 Installed OpenStackClient for Linux</p> <p>To get a token from the cloud, you will first need to enable access from the Ubuntu VM you just created:</p> <p>How to install OpenStackClient for Linux on 3Engines Cloud</p> <p>It will show you how to install Python, create and activate a virtual environment, and then connect to the cloud by downloading and activating the proper RC file from 3Engines Cloud.</p> <p>No. 4 Connect to the cloud via an RC file</p> <p>Another article, How to activate OpenStack CLI access to 3Engines Cloud cloud using one- or two-factor authentication, deals with connecting to the cloud and covers whichever of the one- or two-factor authentication procedures is enabled on your account. It also covers all the main platforms: Linux, MacOS and Windows.</p> <p>You will use both the Python virtual environment and the downloaded RC file after Terraform has been installed.</p>"},{"location":"openstackdev/Generating-and-authorizing-Terraform-using-Keycloak-user-on-3Engines-Cloud.html.html#step-1-install-terraform-as-a-root-user","title":"Step 1 Install Terraform as a root user\ud83d\udd17","text":"<p>Install the required dependencies using the following command:</p> <pre><code>sudo apt-get install wget curl unzip software-properties-common gnupg2 -y\n</code></pre> <p>Download and add the HashiCorp signed gpg keys to your system. 
To perform this action, first enter root mode:</p> <pre><code>sudo su # Enter root mode\ncurl -fsSL https://apt.releases.hashicorp.com/gpg | apt-key add -\n</code></pre> <p>Add the HashiCorp repository to the APT:</p> <pre><code>sudo apt-add-repository \"deb [arch=$(dpkg --print-architecture)] https://apt.releases.hashicorp.com $(lsb_release -cs) main\"\n</code></pre> <p></p> <p>The following commands will update Ubuntu, install Terraform and check its version:</p> <pre><code>apt-get update -y # update Ubuntu\napt-get install terraform -y # install Terraform\nterraform -v # check the version\n</code></pre> <p>Now exit root mode and become the standard eouser again.</p> <pre><code>su eouser # Exit root mode\n</code></pre>"},{"location":"openstackdev/Generating-and-authorizing-Terraform-using-Keycloak-user-on-3Engines-Cloud.html.html#step-2-reconnect-to-the-cloud","title":"Step 2 Reconnect to the cloud\ud83d\udd17","text":"<p>Working through Prerequisites Nos. 2 and 3, you ended up connected to the cloud. That connection is now lost because you have switched to the root user and back again, to the normal eouser for 3Engines Cloud. Refer to Prerequisite No. 4 Activate the RC file to reconnect to the cloud. The following command will act as a test:</p> <pre><code>openstack flavor list\n</code></pre> <p>and should present the start of the list of flavors available in the system:</p> <p></p> <p>You are now ready to receive a token from the cloud you are working with. 
The \u201ctoken\u201d is actually a very long string of characters which serves as a kind of password for your code.</p>"},{"location":"openstackdev/Generating-and-authorizing-Terraform-using-Keycloak-user-on-3Engines-Cloud.html.html#step-3-download-openstack-token","title":"Step 3 Download OpenStack token\ud83d\udd17","text":"<p>Get a token with the following command:</p> <pre><code>openstack token issue -f shell -c id\n</code></pre> <p>This is the result:</p> <pre><code>id=\"gAAAAABj1VTWP_CFhfKv4zWVH7avFUnHYf5J4TvuKG_Md1EdSpBIBZqTVErqVNWCnO-kYq9D7fi33aRCABadsp23-e-lrDFwyZGkfv-d83UkOTsoIuWogupmwx-3gr4wPcsikBvkAMMBD0-XMIkUONAPst6C35QnztSzZmVSeuXOJ33DaGr6yWbY-tNAOpNsk0C9c13U6ROI\"\n</code></pre> <p>The value of the variable id is the token you need. Copy and save it so that you can enter it into the configuration file for Terraform.</p>"},{"location":"openstackdev/Generating-and-authorizing-Terraform-using-Keycloak-user-on-3Engines-Cloud.html.html#step-4-set-up-the-configuration-file-and-initialize-terraform","title":"Step 4 Set up the configuration file and initialize Terraform\ud83d\udd17","text":"<p>Create a new directory where your Terraform files will be stored and switch to it:</p> <pre><code>mkdir terraform-dir # Name it as you want\ncd terraform-dir\n</code></pre> <p>Create a configuration file, yourconffile.tf, and open it in a text editor. Here we use nano:</p> <pre><code>sudo nano yourconffile.tf # Name it as you want\n</code></pre> <p>Paste the following into the file:</p> <pre><code># Configure the OpenStack Provider\n terraform {\n required_providers {\n openstack = {\n source = \"terraform-provider-openstack/openstack\"\n }\n }\n }\n</code></pre> <p>Save the file (for Nano, use Ctrl-X and Y).</p> <p>These commands inform Terraform that it will work with OpenStack.</p> <p>Use the following command to initialize Terraform:</p> <pre><code>terraform init\n</code></pre> <p>Terraform will read the yourconffile.tf file from the current folder. 
The actual name does not matter as long as it is the only .tf file in the folder.</p> <p>You can, of course, use many other .tf files such as</p> <ul> <li>main.tf for the main Terraform program,</li> <li>variable.tf to define variables</li> </ul> <p>and so on.</p> <p>The screen after initialization would look like this:</p> <p></p> <p>Terraform has been initialized and is working properly with your OpenStack cloud. Now add code to perform some useful tasks.</p> <p>Note</p> <p>In the examples that follow, we use two networks, one whose name starts with cloud_ and the other whose name starts with eodata_. The former network should always be present in the account, but the latter may or may not be present. If you do not have a network whose name starts with eodata_, you may create it or use any other network that you already have and want to use.</p>"},{"location":"openstackdev/Generating-and-authorizing-Terraform-using-Keycloak-user-on-3Engines-Cloud.html.html#step-5-create-terraform-code","title":"Step 5 Create Terraform code\ud83d\udd17","text":"<p>Append code to the contents of yourconffile.tf. It will generate four virtual machines, as specified by the value of the variable count. 
The entire file yourconffile.tf should now look like this:</p> <pre><code># Configure the OpenStack Provider\nterraform {\n required_providers {\n openstack = {\n source = \"terraform-provider-openstack/openstack\"\n }\n }\n}\n\nprovider \"openstack\" {\n user_name = \"[email\u00a0protected]\"\n tenant_name = \"cloud_00aaa_1\"\n auth_url = \"https://keystone.3Engines.com:5000/v3\"\n domain_name = \"cloud_00aaa_1\"\n token = \"gAAAAABj1VTWP_CFhfKv4zWVH7avFUnHYf5J4TvuKG_Md1EdSpBIBZqTVErqVNWCnO-kYq9D7fi33aRCABadsp23-e-lrDFwyZGkfv-d83UkOTsoIuWogupmwx-3gr4wPcsikBvkAMMBD0-XMIkUONAPst6C35QnztSzZmVSeuXOJ33DaGr6yWbY-tNAOpNsk0C9c13U6ROI\"\n }\n\nresource \"openstack_compute_instance_v2\" \"test-terra\" {\ncount = 4\nname = \"test-instance-${count.index}\"\nimage_id = \"d7ba6aa0-d5d8-41ed-b29b-3f5336d87340\"\nflavor_id = \"eo2.medium\"\nsecurity_groups = [\n\"default\", \"allow_ping_ssh_icmp_rdp\" ]\nnetwork {\nname = \"eodata_00aaa_3\"\n}\nnetwork {\nname = \"cloud_00aaa_3\"\n}\n}\n</code></pre>"},{"location":"openstackdev/Generating-and-authorizing-Terraform-using-Keycloak-user-on-3Engines-Cloud.html.html#always-use-the-latest-value-of-image-id","title":"Always use the latest value of image id\ud83d\udd17","text":"<p>From time to time, the default images of operating systems in the 3Engines Cloud cloud are upgraded to the new versions. As a consequence, their image id will change. Let\u2019s say that the image id for Ubuntu 20.04 LTS was 574fe1db-8099-4db4-a543-9e89526d20ae at the time of writing of this article. 
While working through the article, you would normally take the current value of the image id and use it to replace 574fe1db-8099-4db4-a543-9e89526d20ae throughout the text.</p> <p>Now, suppose you wanted to automate processes under OpenStack, perhaps using Heat, Terraform, Ansible or any other tool for OpenStack automation; if you use the value of 574fe1db-8099-4db4-a543-9e89526d20ae for the image id, it would remain hardcoded, and once this value changes during an upgrade, the automated process may stop executing.</p> <p>Warning</p> <p>Make sure that your automation code is using the current value of an OS image id, not a hardcoded one.</p>"},{"location":"openstackdev/Generating-and-authorizing-Terraform-using-Keycloak-user-on-3Engines-Cloud.html.html#the-meaning-of-the-variables-used","title":"The meaning of the variables used\ud83d\udd17","text":"<p>The meaning of the variables used is as follows:</p> user_name User name with which you log in to the 3Engines Cloud account. You can use an email address here as well. tenant_name Starts with cloud_00. You can see it in the upper left corner of the Horizon window. domain_name If you have only one project in the domain, this will be identical to the tenant_name from above. token The id value you got from the command openstack token issue. count How many times to repeat the operation (in this case, four new virtual machines to create) name The name of each VM; here it is differentiated by adding an ordinal number at the end of the name, for example, test-instance-0, test-instance-1, test-instance-2, test-instance-3. image_id The name or ID code of an operating system image, which you get with the command Compute -&gt; Images. For example, if you choose the Ubuntu 20.04 LTS image, its ID is d7ba6aa0-d5d8-41ed-b29b-3f5336d87340. flavor_id Name of the flavor that each VM will have. You get these names from the command openstack flavor list. 
security_groups Here, it is an array of two security groups \u2013 default and allow_ping_ssh_icmp_rdp. These are the basic security groups that should be used as a start for all VMs. network Name of the network to use. In this case, we include network eodata_00aaa_3 for eodata and cloud_00aaa_3 for general communication within the cloud."},{"location":"openstackdev/Generating-and-authorizing-Terraform-using-Keycloak-user-on-3Engines-Cloud.html.html#step-6-execute-the-terraform-script","title":"Step 6 Execute the Terraform script\ud83d\udd17","text":"<p>Here is how Terraform will create four instances of Ubuntu 20.04 LTS. The command apply will execute the script; when asked for confirmation to proceed, type yes to start the operation:</p> <pre><code>terraform apply\n</code></pre> <p></p> <p>Type</p> <pre><code>yes\n</code></pre> <p>It will create four VMs as defined by the variable count.</p> <p>You should see output similar to this:</p> <p></p> <p>This is how you would see those virtual machines in Horizon:</p> <p></p> <p>If you wanted to revert the actions, that is, delete the VMs you have just created, the command would be:</p> <pre><code>terraform destroy\n</code></pre> <p>Again, type yes to start the operation.</p>"},{"location":"openstackdev/Generating-and-authorizing-Terraform-using-Keycloak-user-on-3Engines-Cloud.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>Of particular interest would be the following CLI commands for Terraform:</p> plan Shows what changes Terraform is going to apply for you to approve. validate Checks whether the configuration is valid. show Shows the current state or a saved plan. 
<p>Use the command</p> <pre><code>terraform -help\n</code></pre> <p>to learn about the other commands Terraform offers.</p>"},{"location":"openstackdev/Generating-and-authorizing-Terraform-using-Keycloak-user-on-3Engines-Cloud.html.html#what-to-do-next_1","title":"What To Do Next\ud83d\udd17","text":"<p>The article How to create a set of VMs using OpenStack Heat Orchestration on 3Engines Cloud uses the orchestration capabilities of OpenStack to automate the creation of virtual machines. It is a different approach from Terraform, but both can lead to automation under OpenStack.</p>"},{"location":"openstackdev/openstackdev.html.html","title":"OpenStack Development","text":""},{"location":"openstackdev/openstackdev.html.html#available-documentation","title":"Available Documentation","text":"<ul> <li>Authenticating with OpenstackSDK using Keycloak Credentials on 3Engines Cloud</li> <li>Generating and authorizing Terraform using Keycloak user on 3Engines Cloud</li> </ul>"},{"location":"releasenotes/releasenotes.html.html","title":"RELEASE NOTES\ud83d\udd17","text":""},{"location":"releasenotes/releasenotes.html.html#release-notes_1","title":"Release Notes","text":"<ul> <li>Release Notes</li> </ul>"},{"location":"s3/Bucket-sharing-using-s3-bucket-policy-on-3Engines-Cloud.html.html","title":"Bucket sharing using s3 bucket policy on 3Engines Cloud\ud83d\udd17","text":""},{"location":"s3/Bucket-sharing-using-s3-bucket-policy-on-3Engines-Cloud.html.html#s3-bucket-policy","title":"S3 bucket policy\ud83d\udd17","text":"<p>Ceph, the software-defined storage used in the 3Engines Cloud cloud, provides object storage compatible with a subset of the Amazon S3 API. 
Bucket policy in Ceph is part of the S3 API and allows selective sharing of access to object storage buckets between users of different projects in the same cloud.</p>"},{"location":"s3/Bucket-sharing-using-s3-bucket-policy-on-3Engines-Cloud.html.html#naming-conventions-used-in-this-document","title":"Naming conventions used in this document\ud83d\udd17","text":"Bucket Owner OpenStack tenant who created an object storage bucket in their project, intending to share their bucket, or a subset of objects in the bucket, with another tenant in the same cloud. Bucket User OpenStack tenant who wants to gain access to a Bucket Owner\u2019s object storage bucket. Bucket Owner\u2019s Project A project in which a shared bucket is created. Bucket User\u2019s Project A project which gets access to the Bucket Owner\u2019s object storage bucket. Tenant Admin A tenant\u2019s administrator user who can create OpenStack projects and manage users and roles within their domain. <p>In code examples, values typed in all capital letters, such as BUCKET_OWNER_PROJECT_ID, are placeholders which should be replaced with actual values matching your use case.</p>"},{"location":"s3/Bucket-sharing-using-s3-bucket-policy-on-3Engines-Cloud.html.html#limitations","title":"Limitations\ud83d\udd17","text":"<p>It is possible to grant access at the project level only, not at the user level. 
In order to grant access to an individual user, the Bucket User\u2019s Tenant Admin must create a separate project within their domain, to which only selected users will be granted access.</p> <p>The Ceph S3 implementation</p> <ul> <li>supports the S3 actions listed below when setting a bucket policy but</li> <li>does not support user, role or group policies.</li> </ul>"},{"location":"s3/Bucket-sharing-using-s3-bucket-policy-on-3Engines-Cloud.html.html#s3cmd-configuration","title":"S3cmd CONFIGURATION\ud83d\udd17","text":"<p>To share a bucket using an S3 bucket policy, you first have to configure s3cmd using this tutorial: How to access private object storage using S3cmd or boto3 on 3Engines Cloud</p>"},{"location":"s3/Bucket-sharing-using-s3-bucket-policy-on-3Engines-Cloud.html.html#declaring-bucket-policy","title":"Declaring bucket policy\ud83d\udd17","text":"<p>Important</p> <p>The code in this article will work only if the value of the Version parameter is</p> <pre><code>\"Version\": \"2012-10-17\",\n</code></pre>"},{"location":"s3/Bucket-sharing-using-s3-bucket-policy-on-3Engines-Cloud.html.html#policy-json-files-sections","title":"Policy JSON file\u2019s sections\ud83d\udd17","text":"<p>Bucket policy is declared using a JSON file. It can be created using editors such as vim or nano. Here is an example policy JSON template:</p> <pre><code>{\n \"Id\": \"POLICY_ID\",\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Sid\": \"STATEMENT_NAME\",\n \"Action\": [\n \"s3:ACTION_1\",\n \"s3:ACTION_2\"\n ],\n \"Effect\": \"EFFECT\",\n \"Resource\": \"arn:aws:s3:::KEY_SPECIFICATION\",\n \"Condition\": {\n \"CONDITION_1\": {\n }\n },\n \"Principal\": {\n \"AWS\": [\n \"arn:aws:iam::PROJECT_ID:root\"\n ]\n }\n }\n ]\n }\n</code></pre> POLICY_ID ID of your policy. STATEMENT_NAME Name of your statement. ACTION Actions that the Bucket User is granted permission to perform on the bucket. 
PROJECT_ID ID of the project that is granted access."},{"location":"s3/Bucket-sharing-using-s3-bucket-policy-on-3Engines-Cloud.html.html#list-of-actions","title":"List of actions\ud83d\udd17","text":"<pre><code>s3:AbortMultipartUpload\ns3:CreateBucket\ns3:DeleteBucketPolicy\ns3:DeleteBucket\ns3:DeleteBucketWebsite\ns3:DeleteObject\ns3:DeleteObjectVersion\ns3:GetBucketAcl\ns3:GetBucketCORS\ns3:GetBucketLocation\ns3:GetBucketPolicy\ns3:GetBucketRequestPayment\ns3:GetBucketVersioning\ns3:GetBucketWebsite\ns3:GetLifecycleConfiguration\ns3:GetObjectAcl\ns3:GetObject\ns3:GetObjectTorrent\ns3:GetObjectVersionAcl\ns3:GetObjectVersion\ns3:GetObjectVersionTorrent\ns3:ListAllMyBuckets\ns3:ListBucketMultiPartUploads\ns3:ListBucket\ns3:ListBucketVersions\ns3:ListMultipartUploadParts\ns3:PutBucketAcl\ns3:PutBucketCORS\ns3:PutBucketPolicy\ns3:PutBucketRequestPayment\ns3:PutBucketVersioning\ns3:PutBucketWebsite\ns3:PutLifecycleConfiguration\ns3:PutObjectAcl\ns3:PutObject\ns3:PutObjectVersionAcl\n</code></pre>"},{"location":"s3/Bucket-sharing-using-s3-bucket-policy-on-3Engines-Cloud.html.html#key_specification","title":"KEY_SPECIFICATION\ud83d\udd17","text":"<p>It defines a bucket and its keys/objects. For example:</p> <pre><code>\"arn:aws:s3:::*\" - the bucket and all of its objects\n\"arn:aws:s3:::MY_SHARED_BUCKET/*\" - all objects of MY_SHARED_BUCKET\n\"arn:aws:s3:::MY_SHARED_BUCKET/myfolder/*\" - all objects which are subkeys of myfolder in MY_SHARED_BUCKET\n</code></pre>"},{"location":"s3/Bucket-sharing-using-s3-bucket-policy-on-3Engines-Cloud.html.html#conditions","title":"Conditions\ud83d\udd17","text":"<p>Additional conditions to filter access to the bucket. 
For example, you can restrict access to a specific IP address using:</p> <pre><code>\"Condition\": {\n \"IpAddress\": {\n \"aws:SourceIp\": \"USER_IP_ADDRESS/32\"\n }\n}\n</code></pre> <p>or, alternatively, you can block a specific IP address while allowing all others (NotIpAddress matches every IP except the one listed) using:</p> <pre><code>\"Condition\": {\n \"NotIpAddress\": {\n \"aws:SourceIp\": \"EXCLUDED_USER_IP_ADDRESS/32\"\n }\n }\n</code></pre>"},{"location":"s3/Bucket-sharing-using-s3-bucket-policy-on-3Engines-Cloud.html.html#setting-a-policy-on-the-bucket","title":"SETTING A POLICY ON THE BUCKET\ud83d\udd17","text":"<p>The policy may be set on a bucket using the command:</p> <pre><code>s3cmd setpolicy POLICY_JSON_FILE s3://MY_SHARED_BUCKET\n</code></pre> <p>To check the policy on a bucket, use the following command:</p> <pre><code>s3cmd info s3://MY_SHARED_BUCKET\n</code></pre> <p>The policy may be deleted from the bucket using:</p> <pre><code>s3cmd delpolicy s3://MY_SHARED_BUCKET\n</code></pre>"},{"location":"s3/Bucket-sharing-using-s3-bucket-policy-on-3Engines-Cloud.html.html#sample-scenarios","title":"Sample scenarios\ud83d\udd17","text":""},{"location":"s3/Bucket-sharing-using-s3-bucket-policy-on-3Engines-Cloud.html.html#1-grant-readwrite-access-to-a-bucket-user-using-his-project_id","title":"1 Grant read/write access to a Bucket User using his PROJECT_ID\ud83d\udd17","text":"<p>A Bucket Owner wants to grant read/write access to a bucket to a Bucket User, using their PROJECT_ID:</p> <pre><code>{\n \"Version\": \"2012-10-17\",\n \"Id\": \"read-write\",\n \"Statement\": [\n {\n \"Sid\": \"project-read-write\",\n \"Effect\": \"Allow\",\n \"Principal\": {\n \"AWS\": [\n \"arn:aws:iam::BUCKET_OWNER_PROJECT_ID:root\",\n \"arn:aws:iam::BUCKET_USER_PROJECT_ID:root\"\n ]\n },\n \"Action\": [\n \"s3:ListBucket\",\n \"s3:PutObject\",\n \"s3:DeleteObject\",\n \"s3:GetObject\"\n ],\n \"Resource\": [\n \"arn:aws:s3:::*\"\n ]\n }\n ]\n}\n</code></pre> <p>Let\u2019s assume that the file with this policy is named \u201cread-write-policy.json\u201d. 
To apply it, the Bucket Owner should issue:</p> <pre><code>s3cmd setpolicy read-write-policy.json s3://MY_SHARED_BUCKET\n</code></pre> <p>Then, to access the bucket, for example to list it, the Bucket User should issue:</p> <pre><code>s3cmd ls s3://MY_SHARED_BUCKET\n</code></pre>"},{"location":"s3/Bucket-sharing-using-s3-bucket-policy-on-3Engines-Cloud.html.html#2-limit-readwrite-access-to-a-bucket-to-users-accessing-from-specific-ip-address-range","title":"2 \u2013 Limit read/write access to a Bucket to users accessing from specific IP address range\ud83d\udd17","text":"<p>A Bucket Owner wants to grant read/write access to Bucket Users who access the bucket from specific IP ranges.</p> <p>(In this case, we set AWS to \u201c*\u201d, which would in principle grant access to every project in 3Engines Cloud; however, the condition then restricts access to a single IP.)</p> <pre><code>{\n \"Id\": \"Policy1654675551882\",\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Sid\": \"Stmt1654675545682\",\n \"Action\": [\n \"s3:GetObject\",\n \"s3:PutObject\"\n ],\n \"Effect\": \"Allow\",\n \"Resource\": \"arn:aws:s3:::MY_SHARED_BUCKET/*\",\n \"Condition\": {\n \"IpAddress\": {\n \"aws:SourceIp\": \"IP_ADDRESS/32\"\n }\n },\n \"Principal\": {\n \"AWS\": [\n \"*\"\n ]\n }\n }\n ]\n}\n</code></pre> <p>Let\u2019s assume that the file with this policy is named \u201cread-write-policy-ip.json\u201d. To apply it, the Bucket Owner should issue:</p> <pre><code>s3cmd setpolicy read-write-policy-ip.json s3://MY_SHARED_BUCKET\n</code></pre>"},{"location":"s3/Configuration-files-for-s3cmd-command-on-3Engines-Cloud.html.html","title":"Configuration files for s3cmd command on 3Engines Cloud\ud83d\udd17","text":"<p>s3cmd can access remote data using the S3 protocol. This includes the EODATA repository and object storage on the 3Engines Cloud cloud.</p> <p>To connect to S3 storage, s3cmd uses several parameters, such as an access key, secret key, S3 endpoint, and others. 
During configuration, you can enter this data interactively, and the command saves it into a configuration file. This file can then be passed to s3cmd when issuing commands using the connection described within.</p> <p>If you want to use multiple connections from a single virtual machine (such as connecting both to the EODATA repository and to object storage on the 3Engines Cloud cloud), you can create and store multiple configuration files \u2014 one per connection.</p> <p>This article provides examples of how to create and save these configuration files under various circumstances and describes some potential problems you may encounter.</p> <p>The examples are not intended to be executed sequentially as part of a workflow; instead, they illustrate different use cases of s3cmd operations.</p> <p>What We Are Going To Cover</p>"},{"location":"s3/How-To-Install-boto3-In-Windows-on-3Engines-Cloud.html.html","title":"How to Install Boto3 in Windows on 3Engines Cloud\ud83d\udd17","text":"<p>The boto3 library for Python can be used to list and download items from a specified bucket or repository. In this article, you will install it on a Windows system.</p>"},{"location":"s3/How-To-Install-boto3-In-Windows-on-3Engines-Cloud.html.html#step-1-ensure-that-python3-is-preinstalled","title":"Step 1 Ensure That Python3 is Preinstalled\ud83d\udd17","text":"<p>On a Desktop Windows System</p> <p>To run boto3, you need to have Python preinstalled. If you are running Windows on a desktop computer, the first step of this article shows how to do it: How to install OpenStackClient GitBash for Windows on 3Engines Cloud.</p> <p>On a Virtual Machine Running in 3Engines Cloud Cloud</p> <p>Virtual machines created in the 3Engines Cloud cloud will have Python3 already preinstalled. 
If you want to spawn your own Windows VM, two steps are involved:</p> <ol> <li> <p>Log into your 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p> </li> <li> <p>Use or create a new instance in the cloud. See article: Connecting to a Windows VM via RDP through a Linux bastion host port forwarding on 3Engines Cloud.</p> </li> </ol>"},{"location":"s3/How-To-Install-boto3-In-Windows-on-3Engines-Cloud.html.html#step-2-install-boto3-on-windows","title":"Step 2 Install boto3 on Windows\ud83d\udd17","text":"<p>In order to install boto3 on Windows:</p> <ul> <li>Log in as administrator.</li> <li>Click on the Windows icon in the bottom left of your Desktop.</li> <li>Find Command Prompt by entering the abbreviation cmd.</li> </ul> <p></p> <p>Verify that you have an up-to-date Python installed by entering \u201cpython -V\u201d.</p> <p></p> <p>Then install boto3 with the following command:</p> <pre><code>pip install boto3\n</code></pre> <p></p> <p>Verify your installation with the command:</p> <pre><code>pip show boto3\n</code></pre> <p></p>"},{"location":"s3/How-to-access-object-storage-from-3Engines-Cloud-using-boto3.html.html","title":"How to access object storage from 3Engines Cloud using boto3\ud83d\udd17","text":"<p>In this article, you will learn how to access object storage from 3Engines Cloud using the Python library boto3.</p> <p>What We Are Going To Cover</p>"},{"location":"s3/How-to-access-object-storage-from-3Engines-Cloud-using-s3cmd.html.html","title":"How to access object storage from 3Engines Cloud using s3cmd\ud83d\udd17","text":"<p>In this article, you will learn how to access object storage from 3Engines Cloud on Linux using s3cmd, without mounting it as a file system. 
This can be done on a virtual machine on the 3Engines Cloud cloud or on a local Linux computer.</p>"},{"location":"s3/How-to-access-object-storage-from-3Engines-Cloud-using-s3cmd.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Object storage vs. standard file system</li> <li>Terminology: container and bucket</li> <li>Configuring s3cmd</li> <li>S3 paths in s3cmd</li> <li>Listing containers</li> <li>Creating a container</li> <li>Uploading a file to a container</li> <li>Listing files and directories of the root directory of a container</li> <li>Listing files and directories not in the root directory of a container</li> <li>Removing a file from a container</li> <li>Downloading a file from a container</li> <li>Checking how much storage is being used on a container</li> <li>Removing the entire container</li> </ul>"},{"location":"s3/How-to-access-object-storage-from-3Engines-Cloud-using-s3cmd.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p> <p>No. 2 Generated EC2 credentials</p> <p>You need to generate EC2 credentials. Learn more here: How to generate and manage EC2 credentials on 3Engines Cloud</p> <p>No. 3 A Linux computer or virtual machine</p> <p>You need a Linux virtual machine or local computer. This article was written for Ubuntu 22.04. 
Other operating systems might work, but are outside the scope of this article and may require adjusting the commands.</p> <p>If you want to use a virtual machine hosted on the 3Engines Cloud cloud and you don\u2019t have one yet, one of these articles can help:</p>"},{"location":"s3/How-to-access-private-object-storage-using-S3cmd-or-boto3-on-3Engines-Cloud.html.html","title":"How to access private object storage using S3cmd or boto3 on 3Engines Cloud\ud83d\udd17","text":"<p>LEGACY ARTICLE</p> <p>This article is marked as a legacy document and may not reflect the latest information. Please refer to the following articles:</p> <p>How to access object storage from 3Engines Cloud using boto3</p> <p>How to access object storage from 3Engines Cloud using s3cmd</p> <p>Introduction</p> <p>Private object storage (buckets within a user\u2019s project) can be used in various ways. For example, to access files located in object storage, buckets can be mounted and used as a file system using s3fs. Other tools which can be used to achieve better performance are S3cmd (a command line tool) and boto3 (the AWS SDK for Python).</p> <p>S3cmd</p> <p>In order to acquire access to Object Storage buckets via S3cmd, you first have to generate your own EC2 credentials with this tutorial: How to generate and manage EC2 credentials on 3Engines Cloud.</p> <p>Once EC2 credentials are generated, ensure that your instance or local machine is equipped with S3cmd:</p> <pre><code>s3cmd --version\n</code></pre> <p>If not, S3cmd can be installed with:</p> <pre><code>apt install s3cmd\n</code></pre> <p>Now S3cmd can be configured with the following command:</p> <pre><code>s3cmd --configure\n</code></pre> <p>Input and confirm (by pressing Enter) the following values:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>New settings:\nAccess Key: (your EC2 credentials)\nSecret Key: (your EC2 credentials)\nDefault Region: default\nS3 Endpoint: s3.waw4-1.3Engines.com\nDNS-style bucket+hostname:port template for accessing a 
bucket: s3.waw4-1.3Engines.com\nEncryption password: (your password)\nPath to GPG program: /usr/bin/gpg\nUse HTTPS protocol: Yes\nHTTP Proxy server name:\nHTTP Proxy server port: 0\n</code></pre> <pre><code>New settings:\nAccess Key: (your EC2 credentials)\nSecret Key: (your EC2 credentials)\nDefault Region: waw3-1\nS3 Endpoint: s3.waw3-1.3Engines.com\nDNS-style bucket+hostname:port template for accessing a bucket: s3.waw3-1.3Engines.com\nEncryption password: (your password)\nPath to GPG program: /usr/bin/gpg\nUse HTTPS protocol: Yes\nHTTP Proxy server name:\nHTTP Proxy server port: 0\n</code></pre> <pre><code>New settings:\nAccess Key: (your EC2 credentials)\nSecret Key: (your EC2 credentials)\nDefault Region: default\nS3 Endpoint: s3.waw3-2.3Engines.com\nDNS-style bucket+hostname:port template for accessing a bucket: s3.waw3-2.3Engines.com\nEncryption password: (your password)\nPath to GPG program: /usr/bin/gpg\nUse HTTPS protocol: Yes\nHTTP Proxy server name:\nHTTP Proxy server port: 0\n</code></pre> <pre><code>New settings:\nAccess Key: (your EC2 credentials)\nSecret Key: (your EC2 credentials)\nDefault Region: default\nS3 Endpoint: s3.fra1-2.3Engines.com\nDNS-style bucket+hostname:port template for accessing a bucket: s3.fra1-2.3Engines.com\nEncryption password: (your password)\nPath to GPG program: /usr/bin/gpg\nUse HTTPS protocol: Yes\nHTTP Proxy server name:\nHTTP Proxy server port: 0\n</code></pre> <p>After this operation, you should be allowed to list and access your Object Storage.</p> <p>List your buckets with:</p> <pre><code>eouser@vm01:$ s3cmd ls\n2022-02-02 22:22 s3://bucket\n</code></pre> <p>To see available commands for S3cmd, type the following command:</p> <pre><code>s3cmd -h\n</code></pre> <p>boto3</p> <p>Warning</p> <p>We strongly recommend using virtualenv for isolating Python packages. 
The configuration tutorial is here: How to install Python virtualenv or virtualenvwrapper on 3Engines Cloud</p> <p>If virtualenv is activated:</p> <pre><code>(myvenv) eouser@vm01:~$ pip3 install boto3\n</code></pre> <p>Or if we install the package globally:</p> <pre><code>eouser@vm01:~$ sudo pip3 install boto3\n</code></pre> <p>A simple script for accessing your private bucket:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>import boto3\n\ndef boto3connection(access_key, secret_key, bucketname):\n    host = 'https://s3.waw4-1.3Engines.com'\n    s3 = boto3.resource('s3', aws_access_key_id=access_key,\n        aws_secret_access_key=secret_key, endpoint_url=host)\n\n    bucket = s3.Bucket(bucketname)\n    for obj in bucket.objects.filter():\n        print('{0}:{1}'.format(bucket.name, obj.key))\n\n# For Python3\nx = input('Enter your access key:')\ny = input('Enter your secret key:')\nz = input('Enter your bucket name:')\n\nboto3connection(x, y, z)\n</code></pre> <pre><code>import boto3\n\ndef boto3connection(access_key, secret_key, bucketname):\n    host = 'https://s3.waw3-1.3Engines.com'\n    s3 = boto3.resource('s3', aws_access_key_id=access_key,\n        aws_secret_access_key=secret_key, endpoint_url=host)\n\n    bucket = s3.Bucket(bucketname)\n    for obj in bucket.objects.filter():\n        print('{0}:{1}'.format(bucket.name, obj.key))\n\n# For Python3\nx = input('Enter your access key:')\ny = input('Enter your secret key:')\nz = input('Enter your bucket name:')\n\nboto3connection(x, y, z)\n</code></pre> <pre><code>import boto3\n\ndef boto3connection(access_key, secret_key, bucketname):\n    host = 'https://s3.waw3-2.3Engines.com'\n    s3 = boto3.resource('s3', aws_access_key_id=access_key,\n        aws_secret_access_key=secret_key, endpoint_url=host)\n\n    bucket = s3.Bucket(bucketname)\n    for obj in bucket.objects.filter():\n        print('{0}:{1}'.format(bucket.name, obj.key))\n\n# For Python3\nx = input('Enter your access key:')\ny = input('Enter your secret key:')\nz = input('Enter your bucket name:')\n\nboto3connection(x, y, z)\n</code></pre> <pre><code>import boto3\n\ndef boto3connection(access_key, secret_key, bucketname):\n    host = 'https://s3.fra1-2.3Engines.com'\n    s3 = boto3.resource('s3', aws_access_key_id=access_key,\n        aws_secret_access_key=secret_key, endpoint_url=host)\n\n    bucket = s3.Bucket(bucketname)\n    for obj in bucket.objects.filter():\n        print('{0}:{1}'.format(bucket.name, obj.key))\n\n# For Python3\nx = input('Enter your access key:')\ny = input('Enter your secret key:')\nz = input('Enter your bucket name:')\n\nboto3connection(x, y, z)\n</code></pre> <p>Save your file with a .py extension and execute the following command in the terminal:</p> <pre><code>python3 &lt;filename.py&gt;\n</code></pre> <p>Enter the access key, secret key and bucket name. If everything is correct, you should see output in the format bucket_name:object_key."},{"location":"s3/How-to-delete-large-S3-bucket-on-3Engines-Cloud.html.html","title":"How to Delete Large S3 Bucket on 3Engines Cloud\ud83d\udd17","text":"<p>Introduction</p> <p>Due to an openstack-cli limitation, removing S3 buckets with more than 10 000 objects will fail when using the command:</p> <pre><code>openstack container delete --recursive &lt;&lt;bucket_name&gt;&gt;\n</code></pre> <p>showing the following error:</p> <pre><code>Conflict (HTTP 409) (Request-ID: tx00000000000001bb5e8e5-006135c488-35bc5d520-dias_default) clean_up DeleteContainer: Conflict (HTTP 409) (Request-ID:)\n</code></pre> <p>Recommended solution</p> <p>To delete a large S3 bucket, we can use s3cmd.</p> <p>In order to acquire access to your Object Storage buckets via s3cmd, you first have to generate your own EC2 credentials with the following tutorial: How to generate and manage EC2 credentials on 3Engines Cloud</p> <p>After that, you have to configure s3cmd as explained in the following article: How to access private object storage using S3cmd or boto3 on 3Engines Cloud</p> <p>After this, you should be able to list and access your Object Storage.</p> <p>List your buckets with the command:</p> <pre><code>eouser@vm01:$ s3cmd 
ls\n2022-02-02 22:22 s3://large-bucket\n</code></pre> <p>Now you\u2019re able to delete your large bucket with the command presented below, where -r means recursive removal.</p> <pre><code>s3cmd rb -r s3://large-bucket\n</code></pre> <p>The bucket itself and all the files inside will be removed.</p> <pre><code>WARNING: Bucket is not empty. Removing all the objects from it first. This may take some time...\ndelete: 's3://large-bucket/example_file.jpg'\ndelete: 's3://large-bucket/example_file.txt'\ndelete: 's3://large-bucket/example_file.png'\n...\n...\n...\nBucket 's3://large-bucket/' removed\n</code></pre> <p>Your large bucket has been successfully removed and the list of buckets is empty.</p> <pre><code>eouser@vm01:$ s3cmd ls\neouser@vm01:$\n</code></pre>"},{"location":"s3/How-to-install-s3cmd-on-Linux-on-3Engines-Cloud.html.html","title":"How to install s3cmd on Linux on 3Engines Cloud\ud83d\udd17","text":"<p>In this article you will learn how to install s3cmd on Linux. s3cmd can be used, among other things, to:</p> <ul> <li>download files from EODATA repositories as well as to</li> <li>store files in object storage available on 3Engines Cloud,</li> </ul> <p>without mounting these resources as a file system.</p>"},{"location":"s3/How-to-install-s3cmd-on-Linux-on-3Engines-Cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Installing s3cmd using apt</li> <li>Uninstalling s3cmd</li> </ul>"},{"location":"s3/How-to-install-s3cmd-on-Linux-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p> <p>No. 
2 A virtual machine or local computer</p> <p>These instructions are for Ubuntu 22.04, either on a local computer or on a virtual machine hosted on 3Engines Cloud cloud.</p> <p>Other operating systems and environments are outside the scope of this article and might require adjusting the instructions accordingly.</p> <p>If you want to install s3cmd on a virtual machine hosted on 3Engines Cloud cloud, follow one of these articles:</p>"},{"location":"s3/How-to-mount-object-storage-container-as-a-file-system-in-Linux-using-s3fs-on-3Engines-Cloud.html.html","title":"How to Mount Object Storage Container as a File System in Linux Using s3fs on 3Engines Cloud\ud83d\udd17","text":"<p>The following article covers mounting of object storage containers using s3fs on Linux. One possible use case is having easy access to the content of such containers on different computers and virtual machines. For access, you can use your local Linux computer or virtual machines running on 3Engines Cloud cloud. All users of the operating system should have read, write and execute privileges on the contents of these containers.</p>"},{"location":"s3/How-to-mount-object-storage-container-as-a-file-system-in-Linux-using-s3fs-on-3Engines-Cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Installing s3fs</li> <li>Creating a file containing login credentials</li> <li>Creating a mounting point</li> <li>Mounting the container using s3fs</li> <li>Testing whether mounting was successful</li> <li>Unmounting a container</li> <li>Configuring automatic mounting</li> <li>Stopping automatic mounting of a container</li> <li>Potential problems with the way s3fs handles objects</li> </ul>"},{"location":"s3/How-to-mount-object-storage-container-as-a-file-system-in-Linux-using-s3fs-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 
1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p> <p>No. 2 Machine running Linux</p> <p>You need a machine running Linux. It can be a virtual machine running on 3Engines Cloud cloud or your local Linux computer.</p> <p>This article was written for Ubuntu 22.04. If you are running a different distribution, adjust the commands from this article accordingly.</p> <p>No. 3 Object storage container</p> <p>You need at least one object storage container on 3Engines Cloud cloud. The following article shows how to create one: How to use Object Storage on 3Engines Cloud.</p> <p>As a concrete example, let\u2019s say that the container is named my-files and that it contains two items. This is what it could look like in the Horizon dashboard:</p> <p></p> <p>With the proper s3fs command from this article, you will be able to access that container remotely but through a local file system.</p> <p>No. 4 Generated EC2 credentials</p> <p>You need to have EC2 credentials for your object storage containers generated. The following article will tell you how to do it: How to generate and manage EC2 credentials on 3Engines Cloud.</p> <p>No. 5 Knowledge of the Linux command line</p> <p>Basic knowledge of the Linux command line is required.</p>"},{"location":"s3/How-to-mount-object-storage-container-as-a-file-system-in-Linux-using-s3fs-on-3Engines-Cloud.html.html#step-1-sign-in-to-your-linux-machine","title":"Step 1: Sign in to your Linux machine\ud83d\udd17","text":"<p>Sign in to an Ubuntu account which has sudo privileges. If you are using SSH to connect to a virtual machine running on 3Engines Cloud cloud, the username will likely be eouser.</p>"},{"location":"s3/How-to-mount-object-storage-container-as-a-file-system-in-Linux-using-s3fs-on-3Engines-Cloud.html.html#step-2-install-s3fs","title":"Step 2: Install s3fs\ud83d\udd17","text":"<p>First, check if s3fs is installed on your machine. 
Enter the following command in the terminal:</p> <pre><code>which s3fs\n</code></pre> <p>If s3fs is already installed, the output should contain its location, which could look like this:</p> <pre><code>/usr/local/bin/s3fs\n</code></pre> <p>If the output is empty, it probably means that s3fs is not installed. Update your packages and install s3fs using this command:</p> <pre><code>sudo apt update &amp;&amp; sudo apt upgrade &amp;&amp; sudo apt install s3fs\n</code></pre>"},{"location":"s3/How-to-mount-object-storage-container-as-a-file-system-in-Linux-using-s3fs-on-3Engines-Cloud.html.html#step-3-create-file-or-files-containing-login-credentials","title":"Step 3: Create file or files containing login credentials\ud83d\udd17","text":"<p>In this article, we are going to use plain text files for storing S3 credentials - access and secret keys. (If you don\u2019t have the credentials yet, follow Prerequisite No. 4.) Each file can store one such pair and can be used to mount all object storage containers to which that key pair provides access.</p> <p>For each key pair you intend to use, choose a file name and create a corresponding text file. The content of the file will be just one line,</p> <ul> <li>starting with the access key,</li> <li>followed by a colon,</li> <li>followed by the secret key</li> </ul> <p>from that key pair. If the access key is 1234abcd and the secret key is 4321dcba, the corresponding text file should have the following content:</p> <pre><code>1234abcd:4321dcba\n</code></pre> <p>Change permissions of each file containing a key pair to 600. 
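If you prefer to script this step, the one-line access:secret file and the 600 permissions can be created together. Below is a minimal Python sketch, using the dummy keys 1234abcd and 4321dcba from this article; the helper name and the temporary path are purely illustrative, not part of s3fs.

```python
import os
import stat
import tempfile

def write_s3fs_passwd(path, access_key, secret_key):
    """Write an s3fs credential file ("ACCESS:SECRET" on one line)
    and restrict its permissions to 600 (owner read/write only)."""
    with open(path, "w") as f:
        f.write(f"{access_key}:{secret_key}\n")
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # equivalent to chmod 600

# Example with the dummy keys from this article, written to a temp directory:
path = os.path.join(tempfile.mkdtemp(), ".passwd-s3fs")
write_s3fs_passwd(path, "1234abcd", "4321dcba")
print(open(path).read().strip())          # 1234abcd:4321dcba
print(oct(os.stat(path).st_mode & 0o777))  # 0o600
```

The same result is achieved manually with a text editor plus chmod, as described in this step.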
If such a file is called .passwd-s3fs and is stored in your home directory, the command for changing its permissions would be:</p> <pre><code>chmod 600 ~/.passwd-s3fs\n</code></pre>"},{"location":"s3/How-to-mount-object-storage-container-as-a-file-system-in-Linux-using-s3fs-on-3Engines-Cloud.html.html#step-4-create-mount-points","title":"Step 4: Create mount points\ud83d\udd17","text":"<p>The files inside your object storage container should appear inside a folder of your choice. Such a folder will be called mount point in this article. You can use an empty folder from your file system for that purpose. You can also create new folder(s) to use as mount points.</p> <p>To keep things tidy, let us use in this example a standard Linux folder called /mnt, which is what system administrators use to mount other file systems. For each container, use the usual mkdir command to create a subfolder of /mnt. For instance:</p> <pre><code>sudo mkdir /mnt/mount-point\n</code></pre>"},{"location":"s3/How-to-mount-object-storage-container-as-a-file-system-in-Linux-using-s3fs-on-3Engines-Cloud.html.html#step-5-mount-a-container","title":"Step 5: Mount a container\ud83d\udd17","text":"<p>Here is a typical command to mount a container:</p> <pre><code>sudo s3fs my-files /mnt/mount-point -o passwd_file=~/.passwd-s3fs -o url=https://s3.waw3-1.3Engines.com -o endpoint=\"waw3-1\" -o use_path_request_style -o umask=0000 -o allow_other\n</code></pre> <p>It goes without saying that you will need to change some of the parameters involved \u2013 but not all of them. 
Here is what to change and what to use as prescribed:</p> <p>Edit container name and mount point</p> <ul> <li>my-files is the name of the container</li> <li>/mnt/mount-point is a directory in Linux file system which will be the mount point for that container</li> </ul> <p>Edit key pair location (note it starts with -o)</p> <ul> <li>-o passwd_file - location of the file with the key pair used for mounting that container</li> </ul> <p>Do not edit the following parameters \u2013 just copy and paste verbatim</p> <ul> <li>-o url - the endpoint URL address</li> <li>-o endpoint - the S3 region</li> <li>-o use_path_request_style - parameter that fixes issues with certain characters (such as dots)</li> <li>-o umask - umask which describes permissions for accessing a container - in this case it is read, write and execute</li> <li>-o allow_other - allows access to the container to all users on the system</li> </ul> <p>Once you have executed the command, navigate to the directory in which you mounted the object storage container. 
If still using folder /mnt/mount-point, the command is:</p> <pre><code>cd /mnt/mount-point\n</code></pre> <p>List contents of the container and be ready to wait a couple of seconds for the operation to be completed:</p> <pre><code>ls\n</code></pre> <p>You should see files from your object storage container.</p> <p>Suppose you mounted object storage container under /mnt/mount-point, which contains</p> <ul> <li>a directory called some-directory and</li> <li>a file named text-file.txt.</li> </ul> <p>This is what executing the ls command from the mount point could produce:</p> <p></p> <p>To mount multiple containers, repeat the s3fs command with relevant parameters, as many times as needed.</p>"},{"location":"s3/How-to-mount-object-storage-container-as-a-file-system-in-Linux-using-s3fs-on-3Engines-Cloud.html.html#unmounting-a-container","title":"Unmounting a container\ud83d\udd17","text":"<p>To unmount a container, first make sure that the content of your object storage is not in use by any application on your system, including terminals and file managers. After that, execute the following command, replacing /mnt/mount-point with the mount point of your object storage container:</p> <pre><code>sudo umount -lf /mnt/mount-point\n</code></pre>"},{"location":"s3/How-to-mount-object-storage-container-as-a-file-system-in-Linux-using-s3fs-on-3Engines-Cloud.html.html#configuring-automatic-mounting-of-your-object-storage","title":"Configuring automatic mounting of your object storage\ud83d\udd17","text":"<p>Here is how to configure automatic mounting of your object storage containers after system startup.</p> <p>Check the location under which s3fs is installed on your system:</p> <pre><code>which s3fs\n</code></pre> <p>The output should contain the full location of the s3fs binary on your system. 
On Ubuntu virtual machines created using default images on 3Engines Cloud cloud, it will likely be:</p> <pre><code>/usr/local/bin/s3fs\n</code></pre> <p>Memorize it or write it down somewhere; you will need it later.</p> <p>Open the file /etc/fstab for editing. You will need sudo permissions for that. For example, if you wish to use nano for this purpose, execute the following command:</p> <pre><code>sudo nano /etc/fstab\n</code></pre> <p>Append the following line to it:</p> <pre><code>/usr/local/bin/s3fs#my-files /mnt/mount-point fuse passwd_file=/home/eouser/.passwd-s3fs,_netdev,allow_other,use_path_request_style,uid=0,umask=0000,mp_umask=0000,gid=0,url=https://s3.waw3-1.3Engines.com,region=waw3-1 0 0\n</code></pre> <p>Replace the parameters from that line as follows:</p> <ul> <li>/usr/local/bin/s3fs with the full location of the s3fs binary you obtained previously</li> <li>my-files with the name of your object storage container</li> <li>/mnt/mount-point with the full location of the directory which you chose as a mount point</li> <li>/home/eouser/.passwd-s3fs with the full location of the file containing the key pair used to access your object storage container created in Step 3</li> </ul> <p>Append such a line for every container you wish to have automatically mounted.</p> <p>Reboot your VM and check whether the mounting was successful by navigating to each mount point and making sure that the files from those object storage containers are there.</p>"},{"location":"s3/How-to-mount-object-storage-container-as-a-file-system-in-Linux-using-s3fs-on-3Engines-Cloud.html.html#stopping-automatic-mounting-of-a-container","title":"Stopping automatic mounting of a container\ud83d\udd17","text":"<p>If you no longer want your containers to be automatically mounted, first make sure that each of them is not in use by any application on your system, including terminals and file managers.</p> <p>After that, unmount each container you wish to stop from automatic mounting. 
Execute the following command - replace /mnt/mount-point with the mount point of your first container - and repeat it for every other such container, if applicable.</p> <pre><code>sudo umount /mnt/mount-point\n</code></pre> <p>Finally, modify the /etc/fstab file.</p> <p>To do that, open that file in your favorite text editor with sudo. If your favorite text editor is nano, use this command:</p> <pre><code>sudo nano /etc/fstab\n</code></pre> <p>Remove the lines responsible for automatic mounting of containers you no longer want to be automatically mounted. If you followed this article, these lines were added while following its Step 6.</p> <p>Save the file and exit the text editor.</p> <p>You can now reboot your virtual machine to check if the containers are indeed no longer being mounted.</p>"},{"location":"s3/How-to-mount-object-storage-container-as-a-file-system-in-Linux-using-s3fs-on-3Engines-Cloud.html.html#potential-problems-with-the-way-s3fs-handles-objects","title":"Potential problems with the way s3fs handles objects\ud83d\udd17","text":"<p>s3fs attempts to translate object storage to a file system and most of the time is successful. However, sometimes it might not be possible. One of potential problems with s3fs comes from the fact that object storage allows a folder and file to share the same name in the same location, which is outright impossible in normal operating systems.</p> <p>Here is a situation from the Horizon dashboard:</p> <p></p> <p>The first row contains an object named item, with size of 10 bytes. The second row has the object called item labeled in blue color, described as a folder. Both the \u201cfile\u201d and the \u201cfolder\u201d are represented in Horizon to look like regular file and regular folder from Linux or Windows \u2013 except that they are not. In S3 terminology, the former is an object with the name ending in item, while the latter is an object with the name ending in item/ (note it is ending with a slash). 
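This naming quirk can also be checked for programmatically. The following is a minimal, hypothetical Python sketch (not part of s3fs or any official tooling) that scans a list of object keys for names that exist both as "name" and as "name/" - exactly the pairing s3fs cannot fully represent as a file system:

```python
def find_name_collisions(keys):
    """Return key names that exist both as a plain object ("item") and as a
    "folder" object ("item/") in the same key listing."""
    key_set = set(keys)
    # A collision is a non-folder key whose folder twin (key + "/") also exists.
    return sorted(k for k in key_set if not k.endswith("/") and k + "/" in key_set)

# Example listing with the "item" / "item/" pair described in this article:
print(find_name_collisions(["item", "item/", "item/nested.txt", "other.txt"]))
# ['item']
```

Running such a check over a full key listing (for example, one obtained with boto3 or s3cmd) would flag names worth renaming before mounting the container with s3fs.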
Since their names are different, they can coexist in an object store based on the S3 standard.</p> <p>When the above-mentioned location is accessed by s3fs, the ls command will return only the folder:</p> <p></p> <p>To prevent this issue, invent and use consistent file system conventions while utilizing object storage.</p> <p>Another potential problem is that some changes to the object storage might not be immediately visible in the file system created by s3fs. Wait a bit and double-check to see whether that is the case.</p>"},{"location":"s3/How-to-mount-object-storage-container-as-a-file-system-in-Linux-using-s3fs-on-3Engines-Cloud.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>You can also access object storage from 3Engines Cloud without mounting it as a file system.</p> <p>Check the following articles for more information:</p>"},{"location":"s3/How-to-mount-object-storage-container-from-3Engines-Cloud-as-file-system-on-local-Windows-computer.html.html","title":"How to mount object storage container from 3Engines Cloud as file system on local Windows computer\ud83d\udd17","text":"<p>This article describes how to configure direct access to object storage containers from 3Engines Cloud cloud in the This PC window on your local Windows computer. Such containers will be mounted as network drives, for example:</p> <p></p> <p>You will configure mounting using an account which has administrative privileges obtained using UAC (User Account Control). After this process, the container should also be available on accounts which do not have such administrative privileges.</p>"},{"location":"s3/How-to-mount-object-storage-container-from-3Engines-Cloud-as-file-system-on-local-Windows-computer.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface https://horizon.3Engines.com.</p> <p>No. 2. 
Object storage container</p> <p>You need at least one object storage container on the 3Engines Cloud cloud. If you do not have one yet, please follow this article: How to use Object Storage on 3Engines Cloud</p> <p>No. 3. Generated EC2 Credentials</p> <p>You need to generate EC2 credentials for your account.</p> <p>The following article contains information how to do it on Linux: How to generate and manage EC2 credentials on 3Engines Cloud.</p> <p>If instead you want to do it on Windows, you will need to install the OpenStack CLI client first. Check one of these articles to learn more.</p>"},{"location":"s3/How-to-use-Object-Storage-on-3Engines-Cloud.html.html","title":"How to use Object Storage on 3Engines Cloud\ud83d\udd17","text":"<p>Object storage on 3Engines Cloud cloud can be used to store your files in containers. In this article, you will create a basic container and perform basic operations on it, using a web browser.</p>"},{"location":"s3/How-to-use-Object-Storage-on-3Engines-Cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Create a new object storage container</li> <li>Viewing the container</li> <li>Creating a new folder</li> <li>Navigating through folders</li> <li>Uploading a file</li> <li>Deleting files and folders from a container</li> <li>Enabling or disabling public access to object storage containers</li> <li>Using a public link</li> </ul>"},{"location":"s3/How-to-use-Object-Storage-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://horizon.3Engines.com.</p>"},{"location":"s3/How-to-use-Object-Storage-on-3Engines-Cloud.html.html#creating-a-new-object-storage-container","title":"Creating a new object storage container\ud83d\udd17","text":"<p>Login to the Horizon dashboard. 
Navigate to the following section: Object Store &gt; Containers.</p> <p>You should see a list of object storage containers. By default, it will be empty:</p> <p></p> <p>To create a new object storage container, click the button. You should get the following form:</p> <p></p> <p>Enter the name of your choice for that container in the Container Name text field.</p> <p>In general, bucket names should follow domain name constraints:</p> <p>Warning</p> <p>Bucket names must be unique.</p> <p>Bucket names cannot be formatted as an IP address.</p> <p>Bucket names can be between 3 and 63 characters long.</p> <p>Bucket names must not contain uppercase characters or underscores.</p> <p>Bucket names must start with a lowercase letter or number.</p> <p>Bucket names must be a series of one or more labels. Adjacent labels are separated by a single period (.). Bucket names can contain lowercase letters, numbers, and hyphens. Each label must start and end with a lowercase letter or a number.</p> <p>Bucket names cannot contain forward slashes (/).</p> <p>Note</p> <p>Single-tenancy vs. multi-tenancy</p> <p>On the WAW3-1, WAW3-2 and FRA1-2 clouds, single tenancy is enabled. This means that two object storage containers cannot have an identical name. Avoid using common names such as storage or files.</p> <p>In this example, we will use the name file-container for our object storage container. Of course, your name should be different.</p> <p>Section Container Access has two options:</p> Public It will generate a link. Anyone who has it will be able to access files stored on that object storage container, even without being a member of 3Engines Cloud cloud. Not Public This will not generate the link explained above. The container will only be available from within your project unless you set a bucket sharing policy (not covered in this article). 
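As a rough sketch, the naming constraints listed above can be expressed as a small validation function. This is a hypothetical helper, not part of Horizon or any official SDK; the uniqueness rule can only be checked server-side, and the service may enforce constraints beyond these.

```python
import re

# One dot-separated label: lowercase letters, digits and hyphens,
# starting and ending with a letter or digit.
_LABEL = re.compile(r"[a-z0-9]([a-z0-9-]*[a-z0-9])?")

def is_valid_bucket_name(name: str) -> bool:
    if not 3 <= len(name) <= 63:
        return False                      # 3-63 characters long
    if "_" in name or "/" in name or name != name.lower():
        return False                      # no underscores, slashes or uppercase
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", name):
        return False                      # not formatted as an IP address
    # Every label must match the pattern above.
    return all(_LABEL.fullmatch(label) for label in name.split("."))

print(is_valid_bucket_name("file-container"))  # True
print(is_valid_bucket_name("My_Bucket"))       # False
print(is_valid_bucket_name("192.168.0.1"))     # False
```

Validating a name locally before submitting the Horizon form can save a round trip when the form rejects it.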
<p>Click on Submit and see new container in the list:</p> <p></p> <p>You may encounter the following error:</p> <p></p> <p>The reason for it might be that you are trying to create an object storage container which has the same name as another container. Try using a different name.</p>"},{"location":"s3/How-to-use-Object-Storage-on-3Engines-Cloud.html.html#viewing-the-container","title":"Viewing the container\ud83d\udd17","text":"<p>To view the content of the container, click its name on the list:</p> <p></p> <p>You should see files in the container. Initially, it should be empty. You can now create folders and upload files to this container.</p>"},{"location":"s3/How-to-use-Object-Storage-on-3Engines-Cloud.html.html#creating-a-new-folder","title":"Creating a new folder\ud83d\udd17","text":"<p>To create a new folder, click button: . You should get the following form:</p> <p></p> <p>Enter the name for your folder in Folder Name text field. If you use a forward slash, it will create a tree of folders. For example, if you wish to create a folder called place1 and inside that folder another folder called place2, enter the following:</p> <pre><code>place1/place2\n</code></pre> <p>Adding forward slash in the beginning of such directory structure is optional. The folders will be created relative to the directory you are currently in and not to the root directory of your object storage container.</p> <p>Click Create Folder to confirm.</p>"},{"location":"s3/How-to-use-Object-Storage-on-3Engines-Cloud.html.html#navigating-through-folders","title":"Navigating through folders\ud83d\udd17","text":"<p>To navigate to another folder on your object storage container, click its name. Folder names will be written in blue and in the Size column, the word Folder will be shown.</p> <p>Section above text field Click here for filters or full text search shows the folder you are currently in. 
It could, for example, look like this:</p> <p></p> <p>That would be directory another-folder, inside the second-folder directory, which, in turn, is inside the first-folder directory.</p> <p>Click the name of the folder you want to go to.</p>"},{"location":"s3/How-to-use-Object-Storage-on-3Engines-Cloud.html.html#uploading-a-file","title":"Uploading a file\ud83d\udd17","text":"<p>To upload a file to your object storage container, click the button. You should get the following window:</p> <p></p> <p>Click Browse\u2026 to open the file browser which can be used to choose a file which you wish to upload. Its look will vary depending on the operating system you are using and other factors.</p> <p>Once you have chosen the file, its name should be written in the File section, for example:</p> <p></p> <p>You can enter the name and location of the file in your object storage container in the File Name text field. This allows you to rename the file which you are uploading and to put it into a different folder. Forward slashes are used to specify the location of your object storage container in folder hierarchy relative to the folder you are currently in. If you enter a name of a folder which does not exist yet, it will be created.</p> <p>If you do not enter anything into File Name text field, the file will be uploaded to the folder you are currently in and it will not be renamed.</p> <p>Once you\u2019re ready, click Upload File. If the upload was successful, you should receive this confirmation:</p> <p></p> <p>For example, let\u2019s assume that you are in the root directory of your object storage container and you want to upload a file called uploaded-file.txt to the directory called first-folder located there. 
If that is the case, you should enter the following in the File Name text field:</p> <pre><code>first-folder/uploaded-file.txt\n</code></pre> <p>Your file should then be uploaded to that directory:</p> <p></p> <p>Warning</p> <p>Having two files or two folders of the same name in the same directory is impossible. Having a file and folder under the same name in the same directory (extension is considered part of the name here) may lead to problems so it is best to avoid it.</p>"},{"location":"s3/How-to-use-Object-Storage-on-3Engines-Cloud.html.html#deleting-files-and-folders-from-a-container","title":"Deleting files and folders from a container\ud83d\udd17","text":""},{"location":"s3/How-to-use-Object-Storage-on-3Engines-Cloud.html.html#deleting-one-file","title":"Deleting one file\ud83d\udd17","text":"<p>To delete a file from container, open the drop-down menu next to the Download button.</p> <p></p> <p>Click Delete.</p> <p>You should get the following request for confirmation:</p> <p></p> <p>Click Delete to confirm. Your file should be deleted.</p>"},{"location":"s3/How-to-use-Object-Storage-on-3Engines-Cloud.html.html#deleting-one-folder","title":"Deleting one folder\ud83d\udd17","text":"<p>If you want to delete a folder and its contents, click the button next to it. You should get the similar request for confirmation as previously. Like before, click Delete to confirm.</p>"},{"location":"s3/How-to-use-Object-Storage-on-3Engines-Cloud.html.html#deleting-multiple-files-andor-folders","title":"Deleting multiple files and/or folders\ud83d\udd17","text":"<p>If you want to delete multiple files and/or folders at the same time, use checkboxes on the left of the list to select the ones you want to remove, for example:</p> <p></p> <p>You can also select all files and folders on a page by clicking the checkbox above the folders:</p> <p></p> <p>To delete selected items, click the button to the right of the button used to create new folders. 
In this case you should also get the similar request for confirmation. Click Delete to confirm.</p>"},{"location":"s3/How-to-use-Object-Storage-on-3Engines-Cloud.html.html#recommended-number-of-files-in-your-object-storage-containers","title":"Recommended number of files in your object storage containers\ud83d\udd17","text":"<p>It is recommended that you do not have more than 1 000 000 (one million) files and folders in one object storage container since it will make listing them inefficient. If you want to store a large number of files, use multiple object storage containers for that purpose.</p>"},{"location":"s3/How-to-use-Object-Storage-on-3Engines-Cloud.html.html#working-with-public-object-storage-containers","title":"Working with public object storage containers\ud83d\udd17","text":""},{"location":"s3/How-to-use-Object-Storage-on-3Engines-Cloud.html.html#enabling-or-disabling-public-access-to-object-storage-containers","title":"Enabling or disabling public access to object storage containers\ud83d\udd17","text":"<p>During the creation of your object storage container you had an option to set whether it should be accessible by the public or not. If you wish to change that setting later, first find the name of the container you wish to modify in the container list.</p> <p>The details about that object storage container should appear like on the screenshot below \u2013 if not, click on its name to make it appear:</p> <p></p> <p>Check or uncheck the Public Access checkbox depending on whether you wish to enable or disable such access.</p> <p>If you enabled Public Access, a link to your object storage container will be provided.</p>"},{"location":"s3/How-to-use-Object-Storage-on-3Engines-Cloud.html.html#using-a-public-link","title":"Using a public link\ud83d\udd17","text":"<p>Once you have created a public link, enter it into the browser. 
You should see a list of all files and folders in your container, for example:</p> <p></p> <p>Forward slashes are being used as separators between files and folders in the directory paths.</p> <p>If you want to download a file from the root directory of your container, add its name to the link:</p> <p></p> <p>In this example, Firefox was used to access the file called second-upload-file.txt in the file-container object storage container.</p> <p>If you end a link for downloading a file with forward slash, it will download an empty file instead.</p> <p>To share a link used to download a particular file in another folder, add full location and name of the file within folder structure:</p> <p></p> <p>In this example, a file called another-uploaded-file.txt in the directory called second-folder from the file-container object storage container was accessed.</p> <p>Note that this method cannot be used to download folders.</p> <p>Warning</p> <p>If you share a link to one file from an object storage container, the recipient will be able to create download links for all other files on that object storage container. Obviously, this could be a security risk.</p>"},{"location":"s3/How-to-use-Object-Storage-on-3Engines-Cloud.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>Now that you have created your object storage container you can mount it on the platform of your choice for easier access. There are many ways to do that, for instance:</p>"},{"location":"s3/S3-bucket-object-versioning-on-3Engines-Cloud.html.html","title":"S3 bucket object versioning on 3Engines Cloud\ud83d\udd17","text":"<p>S3 bucket versioning allows you to keep different versions of the file stored on object storage. 
Here are some typical use cases:</p> <ul> <li>data recovery and backup</li> <li>accidental deletion protection</li> <li>collaboration and document management</li> <li>application testing and rollbacks</li> <li>change tracking for large datasets</li> <li>file synchronization and archiving</li> </ul> <p>In this article, you will learn how to</p> <ul> <li>set up S3 bucket object versioning on 3Engines Cloud OpenStack</li> <li>download different versions of files and</li> <li>set up automatic removal of previous versions.</li> </ul>"},{"location":"s3/S3-bucket-object-versioning-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to Horizon interface: https://horizon.3Engines.com.</p> <p>No. 2 AWS CLI installed on your local computer or virtual machine</p> <p>AWS CLI is a free and open source software which can manage different clouds, not only those hosted by Amazon Web Services. In this article, you will use it to control your resources hosted on 3Engines Cloud cloud.</p> <p>This article was written for Ubuntu 22.04. The commands may work on other operating systems, but might require adjusting.</p> <p>Here is how to install AWS CLI on Ubuntu 22.04:</p> <pre><code>sudo apt install awscli\n</code></pre> <p>No. 3 Generated EC2 credentials</p> <p>To authenticate to 3Engines Cloud cloud when using AWS CLI, you need to use EC2 credentials. If you don\u2019t have them yet, check How to generate and manage EC2 credentials on 3Engines Cloud</p> <p>No. 4 Bucket naming rules</p> <p>Over the course of this article, you will create several buckets. Make sure that you know the rules regarding what characters are allowed in bucket names. See section Creating a new object storage container of How to use Object Storage on 3Engines Cloud to learn more.</p> <p>No. 5 Terminology: container vs. 
bucket</p> <p>In this article, both \u201ccontainer\u201d and \u201cbucket\u201d represent the same category of resources hosted on 3Engines Cloud cloud. The former term is more often used by the Horizon dashboard and the latter term is more often used by AWS CLI.</p>"},{"location":"s3/S3-bucket-object-versioning-on-3Engines-Cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li> <p>Configuring and testing AWS CLI</p> </li> <li> <p>Configure AWS CLI</p> </li> <li>Verify that AWS CLI is working</li> <li>Assigning bucket names to shell variables</li> </ul> <ul> <li>Making sure that bucket names are unique</li> <li>Creating a bucket without versioning</li> <li>Enabling versioning on a bucket</li> <li>Uploading file</li> <li>S3 paths</li> <li>Uploading another version of a file</li> <li> <p>Listing available versions of a file</p> </li> <li> <p>Example 1: One file, two versions</p> </li> <li>Example 2: Multiple files, multiple versions</li> <li>Downloading a chosen version of a file</li> <li> <p>Deleting objects on version-enabled buckets</p> </li> <li> <p>Setting up a deletion marker</p> </li> <li>Removing deletion marker</li> <li>Permanently removing a file from version-enabled bucket</li> <li> <p>Using lifecycle policy to configure automatic deletion of previous versions of files</p> </li> <li> <p>Preparing the testing environment</p> </li> <li>Setting up automatic removal of previous versions</li> <li>Deleting lifecycle policy</li> <li> <p>Suspending versioning</p> </li> <li> <p>Bucket on which versioning has never been enabled</p> </li> <li>Suspending of versioning</li> </ul>"},{"location":"s3/S3-bucket-object-versioning-on-3Engines-Cloud.html.html#configuring-and-testing-aws-cli","title":"Configuring and testing AWS CLI\ud83d\udd17","text":"<p>Now follows how to configure AWS CLI for the first time; if it has been configured before, you might need to adjust the configuration according to the needs of this 
article.</p>"},{"location":"s3/S3-bucket-object-versioning-on-3Engines-Cloud.html.html#step-1-configure-aws-cli","title":"Step 1: Configure AWS CLI\ud83d\udd17","text":"<p>To configure AWS CLI, create a folder called .aws in your home directory:</p> <pre><code>mkdir ~/.aws\n</code></pre> <p>In it, create a text file called credentials:</p> <pre><code>touch ~/.aws/credentials\n</code></pre> <p>Navigate to the .aws folder:</p> <pre><code>cd ~/.aws\n</code></pre> <p>This is how the listing of the contents of your .aws folder could now look:</p> <p></p> <p>Open the file credentials in a plain text editor of your choice (like nano or vim). If you are using nano, this is the command:</p> <pre><code>nano credentials\n</code></pre> <p>Enter the following:</p> <pre><code>[default]\naws_access_key_id = &lt;&lt;YOUR_ACCESS_KEY&gt;&gt;\naws_secret_access_key = &lt;&lt;YOUR_SECRET_KEY&gt;&gt;\n</code></pre> <p>Replace &lt;&lt;YOUR_ACCESS_KEY&gt;&gt; and &lt;&lt;YOUR_SECRET_KEY&gt;&gt; with your access and secret key, respectively.</p> <p>Save the file and exit the text editor.</p> <p>The commands we provide in this article will have the appropriate endpoint already included, via the --endpoint-url parameter, and all you need to do is to select the command for the cloud that you are using.</p>"},{"location":"s3/S3-bucket-object-versioning-on-3Engines-Cloud.html.html#step-2-verify-that-aws-cli-is-working","title":"Step 2: Verify that AWS CLI is working\ud83d\udd17","text":"<p>Execute the list-buckets command to list buckets:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api list-buckets \\\n--endpoint-url https://s3.waw4-1.3Engines.com\n</code></pre> <pre><code>aws s3api list-buckets \\\n--endpoint-url https://s3.waw3-1.3Engines.com\n</code></pre> <pre><code>aws s3api list-buckets \\\n--endpoint-url https://s3.waw3-2.3Engines.com\n</code></pre> <pre><code>aws s3api list-buckets \\\n--endpoint-url https://s3.fra1-2.3Engines.com\n</code></pre> <p>The output should be in JSON format. 
If you have a bucket named bucket1 and another bucket named bucket2, this is how it could look:</p> <pre><code>{\n \"Buckets\": [\n {\n \"Name\": \"bucket1\",\n \"CreationDate\": \"2023-11-14T08:55:38.526Z\"\n },\n {\n \"Name\": \"bucket2\",\n \"CreationDate\": \"2024-01-30T10:11:44.157Z\"\n }\n ],\n \"Owner\": {\n \"DisplayName\": \"my-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n}\n</code></pre> <p>Here:</p> <ul> <li>The value of key Buckets should contain a list of your buckets (names and creation dates)</li> <li>The value of key Owner should contain the name and ID of your project</li> </ul> <p>Note</p> <p>In this article, colors have been added to make JSON more legible. AWS CLI typically does not output colored text.</p>"},{"location":"s3/S3-bucket-object-versioning-on-3Engines-Cloud.html.html#assigning-bucket-names-to-shell-variables","title":"Assigning bucket names to shell variables\ud83d\udd17","text":"<p>To differentiate between different buckets used in various examples of this article, we will use the following shell variables:</p> bucket_name1 used in the majority of examples in this article bucket_name2 for uploading a file to a non-root directory bucket_name3 for using lifecycle policy bucket_name4 bucket on which versioning has never been enabled, used as an introduction to suspending of versioning bucket_name5 example of suspending of versioning <p>Choose names for these five buckets. Make sure to follow the naming rules from Prerequisite No. 4.</p> <p>Assign the names to these variables, for example:</p> <pre><code>bucket_name1=\"versioning-test\"\nbucket_name2=\"examplebucket\"\nbucket_name3=\"testnoncurrent\"\nbucket_name4=\"no-versioning\"\nbucket_name5=\"anotherbucket\"\n</code></pre> <p>Important</p> <p>The shell variables will be valid as long as your terminal session is active. When you start a new terminal session, you will need to reassign these variables. 
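One lightweight way to avoid retyping them is to keep the assignments in a small file and source it at the start of each session. This is a minimal sketch, assuming a Bash-compatible shell; the file name bucket-names.sh and the names inside it are placeholders, not part of this article's required setup:

```shell
# bucket-names.sh -- example file storing the bucket name variables.
# The names below are placeholders; substitute your own unique names.
bucket_name1="versioning-test"
bucket_name2="examplebucket"
bucket_name3="testnoncurrent"
bucket_name4="no-versioning"
bucket_name5="anotherbucket"
```

In each new terminal session, run source bucket-names.sh before executing the aws commands.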
Therefore, you should write them down somewhere so that you don\u2019t lose them.</p> <p>To use the content of a shell variable as an argument of a command, you need to prefix the name of the variable with $. An example command to create a bucket whose name is stored in variable $bucket_name1 (where &lt;&lt;SOME_URL&gt;&gt; is the URL of the endpoint):</p> <pre><code>aws s3api create-bucket \\\n--endpoint-url &lt;&lt;SOME_URL&gt;&gt; \\\n--bucket $bucket_name1\n</code></pre>"},{"location":"s3/S3-bucket-object-versioning-on-3Engines-Cloud.html.html#making-sure-that-bucket-names-are-unique","title":"Making sure that bucket names are unique\ud83d\udd17","text":"<p>If single tenancy is enabled on the cloud you are using, the name of your bucket needs to be unique for the entire cloud. Buckets called versioning-test, examplebucket etc. may well already exist. If that is the case, the output from the create-bucket command will look like this:</p> <pre><code>argument of type 'NoneType' is not iterable\n</code></pre> <p>To create unique names, you can, for example, add a unique string to the base bucket name. 
Consider adding</p> <ul> <li>your initials</li> <li>the date and time</li> <li>a random number</li> <li>a UUID (a random string of characters and digits)</li> </ul> <p>or even a combination of these methods.</p> <p>As a practical example, on Ubuntu 22.04, use the command uuidgen to generate a UUID:</p> <pre><code>uuidgen\n</code></pre> <p>The output should contain a UUID:</p> <pre><code>889fa8de-9623-4735-99c7-9f1567e2a965\n</code></pre> <p>Then you can copy this generated string and add it to the bucket name, for instance:</p> <pre><code>bucket_name1=\"versioning-test-889fa8de-9623-4735-99c7-9f1567e2a965\"\n</code></pre> <p>If needed, make sure to repeat this process for each variable.</p> <p>The best course of action is to store bucket names somewhere safe as there are two possible scenarios:</p> Reusing the existing buckets Maybe the terminal was rebooted at some point but you want to continue working through the article. Or, you have previously used the article to create several buckets and now you want to continue using them. Avoid using the existing buckets Go through the article without previous baggage, using a \u201cclean slate\u201d approach. 
This is what you would normally do when using the article for the very first time."},{"location":"s3/S3-bucket-object-versioning-on-3Engines-Cloud.html.html#creating-a-bucket-without-versioning","title":"Creating a bucket without versioning\ud83d\udd17","text":"<p>Command create-bucket will create a bucket under your chosen name (variable $bucket_name1).</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api create-bucket \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name1\n</code></pre> <pre><code>aws s3api create-bucket \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name1\n</code></pre> <pre><code>aws s3api create-bucket \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name1\n</code></pre> <pre><code>aws s3api create-bucket \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name1\n</code></pre> <p>The output of this command should be empty if everything went well.</p>"},{"location":"s3/S3-bucket-object-versioning-on-3Engines-Cloud.html.html#enabling-versioning-on-a-bucket","title":"Enabling versioning on a bucket\ud83d\udd17","text":"<p>To enable versioning on the bucket $bucket_name1, use command put-bucket-versioning:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api put-bucket-versioning \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name1 \\\n--versioning-configuration MFADelete=Disabled,Status=Enabled\n</code></pre> <pre><code>aws s3api put-bucket-versioning \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name1 \\\n--versioning-configuration MFADelete=Disabled,Status=Enabled\n</code></pre> <pre><code>aws s3api put-bucket-versioning \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name1 \\\n--versioning-configuration MFADelete=Disabled,Status=Enabled\n</code></pre> <pre><code>aws s3api put-bucket-versioning \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name1 
\\\n--versioning-configuration MFADelete=Disabled,Status=Enabled\n</code></pre> <p>Note</p> <p>On Amazon Web Services, the presence of parameter MFADelete increases security by requiring two security factors when changing versioning status or removing file version. Here we disable it for simplicity.</p> <p>The output of this command should also be empty.</p>"},{"location":"s3/S3-bucket-object-versioning-on-3Engines-Cloud.html.html#uploading-file","title":"Uploading file\ud83d\udd17","text":"<p>Let\u2019s say that we upload a file to the root directory of our container. Let the name of the file be something.txt and let it have the following content: This is version 1.</p> <p></p> <p>We are in the folder which contains that file and we execute command put-object to upload the file to our bucket:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api put-object \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name1 \\\n--body something.txt \\\n--key something.txt\n</code></pre> <pre><code>aws s3api put-object \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name1 \\\n--body something.txt \\\n--key something.txt\n</code></pre> <pre><code>aws s3api put-object \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name1 \\\n--body something.txt \\\n--key something.txt\n</code></pre> <pre><code>aws s3api put-object \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name1 \\\n--body something.txt \\\n--key something.txt\n</code></pre> <p>In this command, the values of parameters are:</p> --body the name of the file within our local file system --key the location (like file name) under which the file is to be saved on the container. <p>We get output like this:</p> <pre><code>{\n \"ETag\": \"\\\"a4d8980efbd9b71f416595a3d5588b32\\\"\",\n \"VersionId\": \"whrj2pDFrrFq0WLdH0zGzprfkebQykf\"\n}\n</code></pre> <p>This upload created the first version of the file. 
The ID of this version is whrj2pDFrrFq0WLdH0zGzprfkebQykf (value of key VersionId). You can write it down, as you will use it again later in this article.</p> <p>The output also provides an ETag key, which is a hash of the object you uploaded.</p>"},{"location":"s3/S3-bucket-object-versioning-on-3Engines-Cloud.html.html#s3-paths","title":"S3 paths\ud83d\udd17","text":"<p>The parameter --key from the put-object command may also be used in other commands to reference an already uploaded file in the bucket.</p> <p>If used without slashes, as in the example above, the file is in the root directory.</p> <p>If your file is outside of the root directory, the value of parameter --key needs to include the directory in which it is located. When providing this path, separate directories and files with forward slashes (/). Contrary to the Linux file system, do not add a slash to the beginning of the path. If your bucket contains</p> <ul> <li>a directory called place1, which contains</li> <li>another directory, place2, which, in turn, contains</li> <li>a file called</li> </ul> <pre><code>myfile.txt\n</code></pre> <p>this is how to specify its path for the --key parameter:</p> <pre><code>--key place1/place2/myfile.txt\n</code></pre> <p>To practice, you can create a new bucket whose name is stored in variable $bucket_name2:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api create-bucket \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name2\n</code></pre> <pre><code>aws s3api create-bucket \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name2\n</code></pre> <pre><code>aws s3api create-bucket \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name2\n</code></pre> <pre><code>aws s3api create-bucket \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name2\n</code></pre> <p>After that, you can create a file named myfile.txt and upload it to the above-mentioned directory of that bucket:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> 
<pre><code>aws s3api put-object \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name2 \\\n--body myfile.txt \\\n--key place1/place2/myfile.txt\n</code></pre> <pre><code>aws s3api put-object \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name2 \\\n--body myfile.txt \\\n--key place1/place2/myfile.txt\n</code></pre> <pre><code>aws s3api put-object \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name2 \\\n--body myfile.txt \\\n--key place1/place2/myfile.txt\n</code></pre> <pre><code>aws s3api put-object \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name2 \\\n--body myfile.txt \\\n--key place1/place2/myfile.txt\n</code></pre>"},{"location":"s3/S3-bucket-object-versioning-on-3Engines-Cloud.html.html#uploading-another-version-of-a-file","title":"Uploading another version of a file\ud83d\udd17","text":"<p>Let us now return to $bucket_name1</p> <p>We already have file something.txt in the root directory of the container, in the cloud.</p> <p>Let\u2019s use your local computer to modify it so that it contains string This is version 2.</p> <p></p> <p>Then, we use the same command, put-object, to upload the modified file to the same location on the bucket:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api put-object \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name1 \\\n--key something.txt \\\n--body something.txt\n</code></pre> <pre><code>aws s3api put-object \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name1 \\\n--key something.txt \\\n--body something.txt\n</code></pre> <pre><code>aws s3api put-object \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name1 \\\n--key something.txt \\\n--body something.txt\n</code></pre> <pre><code>aws s3api put-object \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name1 \\\n--key something.txt \\\n--body something.txt\n</code></pre> <p>The output 
is similar, but this time it contains a different VersionId: t22ZzEq6kt5ILKFfLZgoeSzW.I9HVtN.</p> <pre><code>{\n \"ETag\": \"\\\"ded190b85763d32ce9c09a8aef51f44c\\\"\",\n \"VersionId\": \"t22ZzEq6kt5ILKFfLZgoeSzW.I9HVtN\"\n}\n</code></pre> <p>To list all files in that bucket, execute the list-objects command:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api list-objects \\\n--bucket $bucket_name1 \\\n--endpoint-url https://s3.waw4-1.3Engines.com\n</code></pre> <pre><code>aws s3api list-objects \\\n--bucket $bucket_name1 \\\n--endpoint-url https://s3.waw3-1.3Engines.com\n</code></pre> <pre><code>aws s3api list-objects \\\n--bucket $bucket_name1 \\\n--endpoint-url https://s3.waw3-2.3Engines.com\n</code></pre> <pre><code>aws s3api list-objects \\\n--bucket $bucket_name1 \\\n--endpoint-url https://s3.fra1-2.3Engines.com\n</code></pre> <p>The output will be similar to this:</p> <pre><code>{\n \"Contents\": [\n {\n \"Key\": \"something.txt\",\n \"LastModified\": \"2024-08-23T10:32:30.259Z\",\n \"ETag\": \"\\\"ded190b85763d32ce9c09a8aef51f44c\\\"\",\n \"Size\": 18,\n \"StorageClass\": \"STANDARD\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n }\n ],\n \"RequestCharged\": null\n}\n</code></pre> <p>Here is what the parameters mean:</p> Key name and/or path of the file. LastModified timestamp of when the file was last modified. ETag the hash of the file. Size size of the file in bytes. StorageClass information regarding the storage class (STANDARD in this example). Owner information about your project - name (parameter DisplayName) and ID. <p>In the example above, the bucket still contains only one file - something.txt. 
This upload overwrote it with a new version, but the previous version is still present.</p>"},{"location":"s3/S3-bucket-object-versioning-on-3Engines-Cloud.html.html#listing-available-versions-of-a-file","title":"Listing available versions of a file\ud83d\udd17","text":""},{"location":"s3/S3-bucket-object-versioning-on-3Engines-Cloud.html.html#example-1-one-file-two-versions","title":"Example 1: One file, two versions\ud83d\udd17","text":"<p>To list the available versions of files in this bucket, use list-object-versions:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name1\n</code></pre> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name1\n</code></pre> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name1\n</code></pre> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name1\n</code></pre> <p>The output could look like this:</p> <pre><code>{\n \"Versions\": [\n {\n \"ETag\": \"\\\"ded190b85763d32ce9c09a8aef51f44c\\\"\",\n \"Size\": 18,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"something.txt\",\n \"VersionId\": \"t22ZzEq6kt5ILKFfLZgoeSzW.I9HVtN\",\n \"IsLatest\": true,\n \"LastModified\": \"2024-08-23T10:32:30.259Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n },\n {\n \"ETag\": \"\\\"a4d8980efbd9b71f416595a3d5588b32\\\"\",\n \"Size\": 18,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"something.txt\",\n \"VersionId\": \"whrj2pDFrrFq0WLdH0zGzprfkebQykf\",\n \"IsLatest\": false,\n \"LastModified\": \"2024-08-23T10:19:24.943Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n }\n ],\n \"RequestCharged\": null\n}\n</code></pre> <p>It contains two 
versions we created previously. Each has its own ID, which is the value of parameter VersionId:</p> <p>Table 5 Key vs. VersionId\ud83d\udd17</p> Key VersionId something.txt t22ZzEq6kt5ILKFfLZgoeSzW.I9HVtN something.txt whrj2pDFrrFq0WLdH0zGzprfkebQykf <p>Both of them are tied to the same file called something.txt.</p>"},{"location":"s3/S3-bucket-object-versioning-on-3Engines-Cloud.html.html#example-2-multiple-files-multiple-versions","title":"Example 2: Multiple files, multiple versions\ud83d\udd17","text":"<p>Let us now consider an alternative situation in which we have two files, and one of them has two versions.</p> <p>The output of list-object-versions could then look like this:</p> <pre><code>{\n \"Versions\": [\n {\n \"ETag\": \"\\\"adda90afa69e725c2f551e0722014726\\\"\",\n \"Size\": 575,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"something1.txt\",\n \"VersionId\": \"kv7QRQsfHhEe-T6c9g-v3uIPoyX6FTs\",\n \"IsLatest\": true,\n \"LastModified\": \"2024-08-26T16:10:49.979Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890qwertyuiopasdfghjklzxc\"\n }\n },\n {\n \"ETag\": \"\\\"947073995b23baa9a565cf21bf56a2ba\\\"\",\n \"Size\": 6,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"something2.txt\",\n \"VersionId\": \"no1KrA3MbEtjIk1CnN5U.rTtFKFXSpj\",\n \"IsLatest\": true,\n \"LastModified\": \"2024-08-26T16:12:29.961Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890qwertyuiopasdfghjklzxc\"\n }\n },\n {\n \"ETag\": \"\\\"7d7b28bafa5222d9083fa4ea7e97cff6\\\"\",\n \"Size\": 106,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"something2.txt\",\n \"VersionId\": \"gRYReY1SpVI3rS-Qp0NYDPofoAfGfc7\",\n \"IsLatest\": false,\n \"LastModified\": \"2024-08-26T16:11:21.584Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890qwertyuiopasdfghjklzxc\"\n }\n }\n ],\n \"RequestCharged\": null\n }\n</code></pre> <p>Table 6 Key vs. 
VersionId\ud83d\udd17</p> Key VersionId something1.txt kv7QRQsfHhEe-T6c9g-v3uIPoyX6FTs something2.txt no1KrA3MbEtjIk1CnN5U.rTtFKFXSpj something2.txt gRYReY1SpVI3rS-Qp0NYDPofoAfGfc7 <p>File something1.txt has one version, while file something2.txt has two versions.</p>"},{"location":"s3/S3-bucket-object-versioning-on-3Engines-Cloud.html.html#downloading-a-chosen-version-of-the-file","title":"Downloading a chosen version of the file\ud83d\udd17","text":"<p>Let us return to $bucket_name1.</p> <p>To download the version of file something.txt stored in that bucket which has the VersionId whrj2pDFrrFq0WLdH0zGzprfkebQykf, we execute the get-object command:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api get-object \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name1 \\\n--key something.txt \\\n--version-id whrj2pDFrrFq0WLdH0zGzprfkebQykf \\\n./something.txt\n</code></pre> <pre><code>aws s3api get-object \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name1 \\\n--key something.txt \\\n--version-id whrj2pDFrrFq0WLdH0zGzprfkebQykf \\\n./something.txt\n</code></pre> <pre><code>aws s3api get-object \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name1 \\\n--key something.txt \\\n--version-id whrj2pDFrrFq0WLdH0zGzprfkebQykf \\\n./something.txt\n</code></pre> <pre><code>aws s3api get-object \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name1 \\\n--key something.txt \\\n--version-id whrj2pDFrrFq0WLdH0zGzprfkebQykf \\\n./something.txt\n</code></pre> <p>In this command:</p> --key is the name and/or location of the file on your bucket --version-id is the ID of the chosen version of the file ./something.txt is the name and/or location on your local file system to which you want to download the file. If there is already a file there, it will be overwritten. 
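As a side note, for objects uploaded in a single part, S3-compatible stores typically use the MD5 hash of the object's content as its ETag, so you can sanity-check a downloaded version locally. The following is a minimal sketch, assuming the 18-byte example content from this article; the local file name is just a stand-in for the downloaded copy:

```shell
# Recreate the example content and compute its MD5; for a single-part
# upload, this hash usually matches the ETag reported by get-object
# (the ETag value appears wrapped in extra quotes in the JSON output).
printf 'This is version 1.' > ./something.txt
md5sum ./something.txt | cut -d' ' -f1
```

If the printed hash differs from the ETag of the version you requested, the download is worth repeating.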
<p>The file should be downloaded and we should get output like this:</p> <pre><code>{\n \"AcceptRanges\": \"bytes\",\n \"LastModified\": \"Fri, 23 Aug 2024 10:19:24 GMT\",\n \"ContentLength\": 18,\n \"ETag\": \"\\\"a4d8980efbd9b71f416595a3d5588b32\\\"\",\n \"VersionId\": \"whrj2pDFrrFq0WLdH0zGzprfkebQykf\",\n \"ContentType\": \"binary/octet-stream\",\n \"Metadata\": {}\n}\n</code></pre> <p>The file should be in our current working directory:</p> <p></p> <p>Displaying its contents with the cat command reveals that it is indeed the first version of that file:</p> <p></p>"},{"location":"s3/S3-bucket-object-versioning-on-3Engines-Cloud.html.html#deleting-objects-on-version-enabled-buckets","title":"Deleting objects on version-enabled buckets\ud83d\udd17","text":"<p>AWS CLI includes the delete-object command, which is used to delete files stored on buckets. It behaves differently depending on the circumstances:</p> <ol> <li>On regular buckets, it will simply delete the specified file.</li> <li>On version-enabled buckets, there are two possibilities:</li> </ol> Version to be deleted is not specified The command will not delete the specified file but will, instead, place a deletion marker on the file. The version to be deleted is specified The specified version will be deleted permanently. 
<p>Here are the examples for version-enabled buckets.</p>"},{"location":"s3/S3-bucket-object-versioning-on-3Engines-Cloud.html.html#setting-up-a-deletion-marker","title":"Setting up a deletion marker\ud83d\udd17","text":"<p>The command to delete files from buckets is delete-object.</p> <p>Let us try to delete file named something.txt from bucket $bucket_name1 and let us NOT specify the version:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api delete-object \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name1 \\\n--key something.txt\n</code></pre> <pre><code>aws s3api delete-object \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name1 \\\n--key something.txt\n</code></pre> <pre><code>aws s3api delete-object \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name1 \\\n--key something.txt\n</code></pre> <pre><code>aws s3api delete-object \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name1 \\\n--key something.txt\n</code></pre> <p>This command should return output like this:</p> <pre><code>{\n \"DeleteMarker\": true,\n \"VersionId\": \"A0hVZCX0z6yMrlmoYymeaGPT4nzInS2\"\n}\n</code></pre> <p>The marker we just placed causes the file to be invisible when listing files normally. 
VersionId is useful if you, say, want to remove that marker and restore the file.</p> <p>To fully see the effect of delete-object command, we list objects again using list-objects:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api list-objects \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name1\n</code></pre> <pre><code>aws s3api list-objects \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name1\n</code></pre> <pre><code>aws s3api list-objects \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name1\n</code></pre> <pre><code>aws s3api list-objects \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name1\n</code></pre> <p>This time, the output does not list any files - the file something.txt became invisible.</p> <p>If there are no files to list, you might get the following output:</p> <pre><code>{\n \"RequestCharged\": null\n}\n</code></pre> <p>or your output might be empty.</p> <p>If we list versions of files in our bucket using list-object-versions, for instance:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name1\n</code></pre> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name1\n</code></pre> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name1\n</code></pre> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name1\n</code></pre> <p>we will see that the previous versions are still there:</p> <pre><code>{\n \"Versions\": [\n {\n \"ETag\": \"\\\"ded190b85763d32ce9c09a8aef51f44c\\\"\",\n \"Size\": 18,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"something.txt\",\n \"VersionId\": \"t22ZzEq6kt5ILKFfLZgoeSzW.I9HVtN\",\n \"IsLatest\": false,\n \"LastModified\": 
\"2024-08-23T10:32:30.259Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n },\n {\n \"ETag\": \"\\\"a4d8980efbd9b71f416595a3d5588b32\\\"\",\n \"Size\": 18,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"something.txt\",\n \"VersionId\": \"whrj2pDFrrFq0WLdH0zGzprfkebQykf\",\n \"IsLatest\": false,\n \"LastModified\": \"2024-08-23T10:19:24.943Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n }\n ],\n \"DeleteMarkers\": [\n {\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n },\n \"Key\": \"something.txt\",\n \"VersionId\": \"A0hVZCX0z6yMrlmoYymeaGPT4nzInS2\",\n \"IsLatest\": true,\n \"LastModified\": \"2024-08-23T11:28:48.128Z\"\n }\n ],\n \"RequestCharged\": null\n}\n</code></pre> <p>Apart from the previously uploaded versions, a delete marker (key DeleteMarkers) with version ID of A0hVZCX0z6yMrlmoYymeaGPT4nzInS2 can also be found.</p> <p>Note</p> <p>If your bucket contains additional files, they too will be listed here.</p> <p>Within the Horizon dashboard, the file is also \u201cinvisible\u201d:</p> <p></p> <p>Even though the file cannot be seen, the size of the bucket is still displayed correctly - 36 bytes. 
Each stored version of each file adds to the total size.</p>"},{"location":"s3/S3-bucket-object-versioning-on-3Engines-Cloud.html.html#removing-the-deletion-marker","title":"Removing the deletion marker\ud83d\udd17","text":"<p>To restore the visibility of a file, delete its deletion marker by issuing the delete-object command and specifying the VersionId of the deletion marker:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api delete-object \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name1 \\\n--key something.txt \\\n--version-id A0hVZCX0z6yMrlmoYymeaGPT4nzInS2\n</code></pre> <pre><code>aws s3api delete-object \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name1 \\\n--key something.txt \\\n--version-id A0hVZCX0z6yMrlmoYymeaGPT4nzInS2\n</code></pre> <pre><code>aws s3api delete-object \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name1 \\\n--key something.txt \\\n--version-id A0hVZCX0z6yMrlmoYymeaGPT4nzInS2\n</code></pre> <pre><code>aws s3api delete-object \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name1 \\\n--key something.txt \\\n--version-id A0hVZCX0z6yMrlmoYymeaGPT4nzInS2\n</code></pre> <p>In this command:</p> <ul> <li>something.txt is the name of the file</li> <li>A0hVZCX0z6yMrlmoYymeaGPT4nzInS2 is the VersionId of the deletion marker, which was obtained in the previous section of this article.</li> </ul> <p>Warning</p> <p>Make sure to enter the correct VersionId to prevent accidental deletion of important data!</p> <p>We get the following output:</p> <pre><code>{\n \"DeleteMarker\": true,\n \"VersionId\": \"A0hVZCX0z6yMrlmoYymeaGPT4nzInS2\"\n}\n</code></pre> <p>Once again, let us list object versions using the list-object-versions command:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name1\n</code></pre> <pre><code>aws s3api list-object-versions 
\\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name1\n</code></pre> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name1\n</code></pre> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name1\n</code></pre> <p>The delete marker no longer exists:</p> <pre><code>{\n \"Versions\": [\n {\n \"ETag\": \"\\\"ded190b85763d32ce9c09a8aef51f44c\\\"\",\n \"Size\": 18,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"something.txt\",\n \"VersionId\": \"t22ZzEq6kt5ILKFfLZgoeSzW.I9HVtN\",\n \"IsLatest\": true,\n \"LastModified\": \"2024-08-23T10:32:30.259Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n },\n {\n \"ETag\": \"\\\"a4d8980efbd9b71f416595a3d5588b32\\\"\",\n \"Size\": 18,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"something.txt\",\n \"VersionId\": \"whrj2pDFrrFq0WLdH0zGzprfkebQykf\",\n \"IsLatest\": false,\n \"LastModified\": \"2024-08-23T10:19:24.943Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n }\n ],\n \"RequestCharged\": null\n}\n</code></pre> <p>And if we list files with list-objects:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api list-objects \\\n--bucket $bucket_name1 \\\n--endpoint-url https://s3.waw4-1.3Engines.com\n</code></pre> <pre><code>aws s3api list-objects \\\n--bucket $bucket_name1 \\\n--endpoint-url https://s3.waw3-1.3Engines.com\n</code></pre> <pre><code>aws s3api list-objects \\\n--bucket $bucket_name1 \\\n--endpoint-url https://s3.waw3-2.3Engines.com\n</code></pre> <pre><code>aws s3api list-objects \\\n--bucket $bucket_name1 \\\n--endpoint-url https://s3.fra1-2.3Engines.com\n</code></pre> <p>the output once again shows one file - something.txt:</p> <pre><code>{\n \"Contents\": [\n {\n \"Key\": \"something.txt\",\n \"LastModified\": \"2024-08-23T10:32:30.259Z\",\n 
\"ETag\": \"\\\"ded190b85763d32ce9c09a8aef51f44c\\\"\",\n \"Size\": 18,\n \"StorageClass\": \"STANDARD\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n }\n ],\n \"RequestCharged\": null\n}\n</code></pre> <p>The file should now also be visible in Horizon again:</p> <p></p> <p>That on this screenshot, the visible file has size 18 bytes, whereas the total size of this bucket is 36 bytes. This is because the total size includes both stored versions of the file.</p>"},{"location":"s3/S3-bucket-object-versioning-on-3Engines-Cloud.html.html#permanently-removing-files-from-version-enabled-bucket","title":"Permanently removing files from version-enabled bucket\ud83d\udd17","text":"<p>You can delete versions of file stored in the bucket just like you can delete the previously mentioned delete marker.</p> <p>The two versions of file something.txt, t22ZzEq6kt5ILKFfLZgoeSzW.I9HVtN and whrj2pDFrrFq0WLdH0zGzprfkebQykf still exist in bucket $bucket_name1.</p> <p>To delete the first version permanently, we use command delete-object similar to the one used to remove the deletion marker. 
The difference is that here we specify the VersionID which we want to remove.</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api delete-object \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name1 \\\n--key something.txt \\\n--version-id t22ZzEq6kt5ILKFfLZgoeSzW.I9HVtN\n</code></pre> <pre><code>aws s3api delete-object \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name1 \\\n--key something.txt \\\n--version-id t22ZzEq6kt5ILKFfLZgoeSzW.I9HVtN\n</code></pre> <pre><code>aws s3api delete-object \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name1 \\\n--key something.txt \\\n--version-id t22ZzEq6kt5ILKFfLZgoeSzW.I9HVtN\n</code></pre> <pre><code>aws s3api delete-object \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name1 \\\n--key something.txt \\\n--version-id t22ZzEq6kt5ILKFfLZgoeSzW.I9HVtN\n</code></pre> <p>We should get output like this:</p> <pre><code>{\n \"VersionId\": \"t22ZzEq6kt5ILKFfLZgoeSzW.I9HVtN\"\n}\n</code></pre> <p>When we list versions of files stored on bucket with list-object-versions:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name1\n</code></pre> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name1\n</code></pre> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name1\n</code></pre> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name1\n</code></pre> <p>the output will show us only one version of one file:</p> <pre><code>{\n \"Versions\": [\n {\n \"ETag\": \"\\\"a4d8980efbd9b71f416595a3d5588b32\\\"\",\n \"Size\": 18,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"something.txt\",\n \"VersionId\": \"whrj2pDFrrFq0WLdH0zGzprfkebQykf\",\n \"IsLatest\": true,\n 
\"LastModified\": \"2024-08-23T10:19:24.943Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n }\n ],\n \"RequestCharged\": null\n}\n</code></pre> <p>In the Horizon dashboard, the total size of the bucket was reduced to 18 bytes:</p> <p></p> <p>If we delete the last version using command delete-object,</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api delete-object \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name1 \\\n--key something.txt \\\n--version-id whrj2pDFrrFq0WLdH0zGzprfkebQykf\n</code></pre> <pre><code>aws s3api delete-object \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name1 \\\n--key something.txt \\\n--version-id whrj2pDFrrFq0WLdH0zGzprfkebQykf\n</code></pre> <pre><code>aws s3api delete-object \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name1 \\\n--key something.txt \\\n--version-id whrj2pDFrrFq0WLdH0zGzprfkebQykf\n</code></pre> <pre><code>aws s3api delete-object \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name1 \\\n--key something.txt \\\n--version-id whrj2pDFrrFq0WLdH0zGzprfkebQykf\n</code></pre> <p>the last file from Horizon dashboard should disappear and the size of the bucket should be reduced to zero bytes:</p> <p></p> <p>If we now execute the list-object-versions command:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name1\n</code></pre> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name1\n</code></pre> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name1\n</code></pre> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name1\n</code></pre> <p>we will see that there are no files or 
versions there:</p> <pre><code>{\n \"RequestCharged\": null\n}\n</code></pre>"},{"location":"s3/S3-bucket-object-versioning-on-3Engines-Cloud.html.html#using-lifecycle-policy-to-configure-automatic-deletion-of-previous-versions-of-files","title":"Using lifecycle policy to configure automatic deletion of previous versions of files\ud83d\udd17","text":"<p>A \u201cnoncurrent version\u201d is any version of a file which is not the latest. In this section, we will cover how to configure automatic deletion of these versions after a specified number of days.</p> <p>For this purpose, we will use a function called \u201clifecycle policy\u201d.</p> <p>This example covers configuring automatic removal of noncurrent versions of a file 1 day after a newer version of the same file has been uploaded.</p>"},{"location":"s3/S3-bucket-object-versioning-on-3Engines-Cloud.html.html#preparing-the-testing-environment","title":"Preparing the testing environment\ud83d\udd17","text":"<p>For testing, create a bucket whose name is stored in the variable $bucket_name3 and enable versioning:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api create-bucket \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name3\n\naws s3api put-bucket-versioning \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name3 \\\n--versioning-configuration MFADelete=Disabled,Status=Enabled\n</code></pre> <pre><code>aws s3api create-bucket \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name3\n\naws s3api put-bucket-versioning \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name3 \\\n--versioning-configuration MFADelete=Disabled,Status=Enabled\n</code></pre> <pre><code>aws s3api create-bucket \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name3\n\naws s3api put-bucket-versioning \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name3 \\\n--versioning-configuration 
MFADelete=Disabled,Status=Enabled\n</code></pre> <pre><code>aws s3api create-bucket \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name3\n\naws s3api put-bucket-versioning \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name3 \\\n--versioning-configuration MFADelete=Disabled,Status=Enabled\n</code></pre> <p>For the sake of this article, let us suppose that we are in a folder which contains the following two files:</p> <ul> <li>mycode.py</li> <li>announcement.md</li> </ul> <p>The actual content of these files is not important here.</p> <p>We upload these files to $bucket_name3 using the standard put-object command:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api put-object \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name3 \\\n--body mycode.py \\\n--key mycode.py\n\naws s3api put-object \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name3 \\\n--body announcement.md \\\n--key announcement.md\n</code></pre> <pre><code>aws s3api put-object \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name3 \\\n--body mycode.py \\\n--key mycode.py\n\naws s3api put-object \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name3 \\\n--body announcement.md \\\n--key announcement.md\n</code></pre> <pre><code>aws s3api put-object \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name3 \\\n--body mycode.py \\\n--key mycode.py\n\naws s3api put-object \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name3 \\\n--body announcement.md \\\n--key announcement.md\n</code></pre> <pre><code>aws s3api put-object \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name3 \\\n--body mycode.py \\\n--key mycode.py\n\naws s3api put-object \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name3 \\\n--body announcement.md \\\n--key announcement.md\n</code></pre> <p>To see these 
files after upload, execute list-object-versions on the bucket:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name3\n</code></pre> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name3\n</code></pre> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name3\n</code></pre> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name3\n</code></pre> <p>We get the following output:</p> <pre><code>{\n \"Versions\": [\n {\n \"ETag\": \"\\\"d185982da39fb33854a5b49c8e416e07\\\"\",\n \"Size\": 34,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"announcement.md\",\n \"VersionId\": \"r714CQ6MLAo4l300Fv9iBCqfNpESPpN\",\n \"IsLatest\": true,\n \"LastModified\": \"2024-10-04T14:51:26.015Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n },\n {\n \"ETag\": \"\\\"6cf02e36dd1dc8b58ea77ba4a94291f2\\\"\",\n \"Size\": 21,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"mycode.py\",\n \"VersionId\": \".qBE6Dx91dxnU7aYOzmBMM1qRg3QwAx\",\n \"IsLatest\": true,\n \"LastModified\": \"2024-10-04T14:51:41.115Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n }\n ],\n \"RequestCharged\": null\n}\n</code></pre> <p>To test automatic deleting of previous versions, we modify file named mycode.py on our local computer and upload it one more time using put-object:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api put-object \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name3 \\\n--body mycode.py \\\n--key mycode.py\n</code></pre> <pre><code>aws s3api put-object \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name3 \\\n--body mycode.py \\\n--key 
mycode.py\n</code></pre> <pre><code>aws s3api put-object \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name3 \\\n--body mycode.py \\\n--key mycode.py\n</code></pre> <pre><code>aws s3api put-object \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name3 \\\n--body mycode.py \\\n--key mycode.py\n</code></pre> <p>Executing list-object-versions again:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name3\n</code></pre> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name3\n</code></pre> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name3\n</code></pre> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name3\n</code></pre> <p>confirms that one of the files has two versions while the other only has one version:</p> <pre><code>{\n \"Versions\": [\n {\n \"ETag\": \"\\\"d185982da39fb33854a5b49c8e416e07\\\"\",\n \"Size\": 34,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"announcement.md\",\n \"VersionId\": \"r714CQ6MLAo4l300Fv9iBCqfNpESPpN\",\n \"IsLatest\": true,\n \"LastModified\": \"2024-10-04T14:51:26.015Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n },\n {\n \"ETag\": \"\\\"3a474b21ab418d007ad677262dfed5b6\\\"\",\n \"Size\": 39,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"mycode.py\",\n \"VersionId\": \"tYJ6IazGryIWjv4iwSM1mLTW4-AnhMN\",\n \"IsLatest\": true,\n \"LastModified\": \"2024-10-04T14:55:07.223Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n },\n {\n \"ETag\": \"\\\"6cf02e36dd1dc8b58ea77ba4a94291f2\\\"\",\n \"Size\": 21,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"mycode.py\",\n \"VersionId\": 
\".qBE6Dx91dxnU7aYOzmBMM1qRg3QwAx\",\n \"IsLatest\": false,\n \"LastModified\": \"2024-10-04T14:51:41.115Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n }\n ],\n \"RequestCharged\": null\n}\n</code></pre> <p>Contrary to other versions of files stored here, the first version of the file mycode.py has false under the key IsLatest. This shows that it is not the latest version of that file.</p>"},{"location":"s3/S3-bucket-object-versioning-on-3Engines-Cloud.html.html#setting-up-automatic-removal-of-previous-versions","title":"Setting up automatic removal of previous versions\ud83d\udd17","text":"<p>The lifecycle policy is written in JSON. Create a file named noncurrent-policy.json in your current working directory (it doesn\u2019t have to be the location of the file which contains your login credentials) and enter the following code into it:</p> <pre><code>{\n \"Rules\": [\n {\n \"ID\": \"NoncurrentVersionExpiration\",\n \"Filter\": {\n \"Prefix\": \"\"\n },\n \"Status\": \"Enabled\",\n \"NoncurrentVersionExpiration\": {\n \"NoncurrentDays\": 1\n }\n }\n ]\n}\n</code></pre> <p>Replace 1 with the number of days after which noncurrent versions are to be deleted.</p> <p>In this example, we will apply this policy to bucket $bucket_name3. 
The command is put-bucket-lifecycle-configuration:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api put-bucket-lifecycle-configuration \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name3 \\\n--lifecycle-configuration file://noncurrent-policy.json\n</code></pre> <pre><code>aws s3api put-bucket-lifecycle-configuration \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name3 \\\n--lifecycle-configuration file://noncurrent-policy.json\n</code></pre> <pre><code>aws s3api put-bucket-lifecycle-configuration \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name3 \\\n--lifecycle-configuration file://noncurrent-policy.json\n</code></pre> <pre><code>aws s3api put-bucket-lifecycle-configuration \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name3 \\\n--lifecycle-configuration file://noncurrent-policy.json\n</code></pre> <p>The output should be empty.</p> <p>To verify that the policy was applied, execute the get-bucket-lifecycle-configuration command:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api get-bucket-lifecycle-configuration \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name3\n</code></pre> <pre><code>aws s3api get-bucket-lifecycle-configuration \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name3\n</code></pre> <pre><code>aws s3api get-bucket-lifecycle-configuration \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name3\n</code></pre> <pre><code>aws s3api get-bucket-lifecycle-configuration \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name3\n</code></pre> <p>In the output, you should see the policy which you applied:</p> <pre><code>{\n \"Rules\": [\n {\n \"ID\": \"NoncurrentVersionExpiration\",\n \"Filter\": {\n \"Prefix\": \"\"\n },\n \"Status\": \"Enabled\",\n \"NoncurrentVersionExpiration\": {\n \"NoncurrentDays\": 1\n }\n }\n ]\n}\n</code></pre> <p>Versions of 
files which are not the latest should now be removed after 1 day.</p> <p>In this example, logging in after one day and executing list-object-versions again:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name3\n</code></pre> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name3\n</code></pre> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name3\n</code></pre> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name3\n</code></pre> <p>reveals that the version of file mycode.py which is not the latest was deleted:</p> <pre><code>{\n \"Versions\": [\n {\n \"ETag\": \"\\\"d185982da39fb33854a5b49c8e416e07\\\"\",\n \"Size\": 34,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"announcement.md\",\n \"VersionId\": \"r714CQ6MLAo4l300Fv9iBCqfNpESPpN\",\n \"IsLatest\": true,\n \"LastModified\": \"2024-10-04T14:51:26.015Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n },\n {\n \"ETag\": \"\\\"3a474b21ab418d007ad677262dfed5b6\\\"\",\n \"Size\": 39,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"mycode.py\",\n \"VersionId\": \"tYJ6IazGryIWjv4iwSM1mLTW4-AnhMN\",\n \"IsLatest\": true,\n \"LastModified\": \"2024-10-04T14:55:07.223Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n }\n ],\n \"RequestCharged\": null\n}\n</code></pre>"},{"location":"s3/S3-bucket-object-versioning-on-3Engines-Cloud.html.html#deleting-lifecycle-policy","title":"Deleting lifecycle policy\ud83d\udd17","text":"<p>Command delete-bucket-lifecycle deletes bucket lifecycle policy. 
This is how to do it on bucket $bucket_name3.</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api delete-bucket-lifecycle \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name3\n</code></pre> <pre><code>aws s3api delete-bucket-lifecycle \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name3\n</code></pre> <pre><code>aws s3api delete-bucket-lifecycle \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name3\n</code></pre> <pre><code>aws s3api delete-bucket-lifecycle \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name3\n</code></pre> <p>The output of this command should be empty.</p> <p>To verify, we once again check the current lifecycle configuration:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api get-bucket-lifecycle-configuration \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name3\n</code></pre> <pre><code>aws s3api get-bucket-lifecycle-configuration \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name3\n</code></pre> <pre><code>aws s3api get-bucket-lifecycle-configuration \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name3\n</code></pre> <pre><code>aws s3api get-bucket-lifecycle-configuration \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name3\n</code></pre> <p>This command should return either an empty output or:</p> <pre><code>argument of type 'NoneType' is not iterable\n</code></pre> <p>The policy should now no longer apply.</p>"},{"location":"s3/S3-bucket-object-versioning-on-3Engines-Cloud.html.html#suspending-versioning","title":"Suspending versioning\ud83d\udd17","text":"<p>If you no longer want to store multiple versions of files, you can suspend the versioning.</p>"},{"location":"s3/S3-bucket-object-versioning-on-3Engines-Cloud.html.html#bucket-on-which-versioning-has-never-been-enabled","title":"Bucket on which versioning has never been 
enabled\ud83d\udd17","text":"<p>To better understand how it works, let us start with a bucket on which versioning has never been enabled in the first place.</p> <p>On such a bucket, every file will only have one version, and its ID will always be null.</p> <p>If you upload another file under the same name, its VersionId will also be null, and it will replace the previously uploaded file.</p>"},{"location":"s3/S3-bucket-object-versioning-on-3Engines-Cloud.html.html#example","title":"Example\ud83d\udd17","text":"<p>For this example, we will create a bucket $bucket_name4 on which versioning has never been enabled.</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api create-bucket \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name4\n</code></pre> <pre><code>aws s3api create-bucket \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name4\n</code></pre> <pre><code>aws s3api create-bucket \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name4\n</code></pre> <pre><code>aws s3api create-bucket \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name4\n</code></pre> <p>Buckets can, of course, contain files of various types. For the sake of this example, suppose that the bucket contains the following three files:</p> <p>Table 7 File vs. the editor\ud83d\udd17</p> File Editor document.odt LibreOffice screenshot1.png GIMP, Krita etc. script.sh nano, vim etc. <p>The actual content of these files is not important here. 
You can use the editors from this table to create the files and then upload them with put-object:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api put-object \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name4 \\\n--body document.odt \\\n--key document.odt\n\naws s3api put-object \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name4 \\\n--body screenshot1.png \\\n--key screenshot1.png\n\naws s3api put-object \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name4 \\\n--body script.sh \\\n--key script.sh\n</code></pre> <pre><code>aws s3api put-object \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name4 \\\n--body document.odt \\\n--key document.odt\n\naws s3api put-object \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name4 \\\n--body screenshot1.png \\\n--key screenshot1.png\n\naws s3api put-object \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name4 \\\n--body script.sh \\\n--key script.sh\n</code></pre> <pre><code>aws s3api put-object \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name4 \\\n--body document.odt \\\n--key document.odt\n\naws s3api put-object \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name4 \\\n--body screenshot1.png \\\n--key screenshot1.png\n\naws s3api put-object \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name4 \\\n--body script.sh \\\n--key script.sh\n</code></pre> <pre><code>aws s3api put-object \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name4 \\\n--body document.odt \\\n--key document.odt\n\naws s3api put-object \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name4 \\\n--body screenshot1.png \\\n--key screenshot1.png\n\naws s3api put-object \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name4 \\\n--body script.sh \\\n--key 
script.sh\n</code></pre> <p>First, let\u2019s try to execute the previously mentioned list-object-versions command on this bucket:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name4\n</code></pre> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name4\n</code></pre> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name4\n</code></pre> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name4\n</code></pre> <p>Example output:</p> <pre><code>{\n \"Versions\": [\n {\n \"ETag\": \"\\\"5064a9c6200fd7dae7c25f2ed01a6f8f\\\"\",\n \"Size\": 9639,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"document.odt\",\n \"VersionId\": \"null\",\n \"IsLatest\": true,\n \"LastModified\": \"2024-09-16T11:19:02.425Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n },\n {\n \"ETag\": \"\\\"e3fedcd58235e90e7a676a84cd6c7ee6\\\"\",\n \"Size\": 174203,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"screenshot1.png\",\n \"VersionId\": \"null\",\n \"IsLatest\": true,\n \"LastModified\": \"2024-09-16T11:17:17.085Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n },\n {\n \"ETag\": \"\\\"5600fdc5aa752cba9895d985a9cf709e\\\"\",\n \"Size\": 36,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"script.sh\",\n \"VersionId\": \"null\",\n \"IsLatest\": true,\n \"LastModified\": \"2024-09-16T11:17:47.206Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n }\n ],\n \"RequestCharged\": null\n}\n</code></pre> <p>All of these files have only one version and it has null as its ID.</p> <p>Let\u2019s say that we locally modify one of these three files 
(here we are using script.sh) and upload the modified version under the same name and key.</p> <p>For this purpose, from within the folder which contains our file, we execute the following put-object command:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api put-object \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name4 \\\n--body script.sh \\\n--key script.sh\n</code></pre> <pre><code>aws s3api put-object \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name4 \\\n--body script.sh \\\n--key script.sh\n</code></pre> <pre><code>aws s3api put-object \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name4 \\\n--body script.sh \\\n--key script.sh\n</code></pre> <pre><code>aws s3api put-object \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name4 \\\n--body script.sh \\\n--key script.sh\n</code></pre> <p>For confirmation, we should get output containing the ETag:</p> <pre><code>{\n \"ETag\": \"\\\"b6b82cb2376934bcf6877705bae6ac58\\\"\"\n}\n</code></pre> <p>If we list object versions again with list-object-versions:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name4\n</code></pre> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name4\n</code></pre> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name4\n</code></pre> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name4\n</code></pre> <p>we should get the output like this:</p> <pre><code>{\n \"Versions\": [\n {\n \"ETag\": \"\\\"5064a9c6200fd7dae7c25f2ed01a6f8f\\\"\",\n \"Size\": 9639,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"document.odt\",\n \"VersionId\": \"null\",\n \"IsLatest\": true,\n \"LastModified\": 
\"2024-09-16T11:19:02.425Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n },\n {\n \"ETag\": \"\\\"e3fedcd58235e90e7a676a84cd6c7ee6\\\"\",\n \"Size\": 174203,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"screenshot1.png\",\n \"VersionId\": \"null\",\n \"IsLatest\": true,\n \"LastModified\": \"2024-09-16T11:17:17.085Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n },\n {\n \"ETag\": \"\\\"b6b82cb2376934bcf6877705bae6ac58\\\"\",\n \"Size\": 60,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"script.sh\",\n \"VersionId\": \"null\",\n \"IsLatest\": true,\n \"LastModified\": \"2024-10-02T10:16:11.589Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n }\n ],\n \"RequestCharged\": null\n}\n</code></pre> <p>Once again, there are three files, each with exactly one version stored. The file script.sh was overwritten during our upload - its parameters Size, ETag and LastModified (timestamp of last modification) have changed.</p>"},{"location":"s3/S3-bucket-object-versioning-on-3Engines-Cloud.html.html#suspending-of-versioning","title":"Suspending of versioning\ud83d\udd17","text":"<p>When you suspend the versioning, your bucket will start behaving similarly to a bucket on which versioning has never been enabled. All files uploaded from that moment on will have null as their VersionId.</p> <p>Let\u2019s say that after suspending of versioning, you upload a file to the same key (name and location within the bucket) as a previously existing file. 
What happens next depends on whether the bucket already contains a version of that file with a VersionId of null:</p> <ul> <li>If a version which has null as its VersionId does not exist, the version you are uploading will become a new version of that file.</li> <li>If a version which has null as its VersionId does exist, the version you are uploading will overwrite the previous version of that file with a VersionId of null.</li> </ul> <p>Either way, the newly uploaded version will have a VersionId of null.</p> <p>This overwrite will also happen if a version which has null as its VersionId is the only remaining version of the file.</p> <p>Suspending versioning by itself will not, however, affect previously saved versions which do not have a VersionId of null. You can delete them manually if you want to.</p> <p>To illustrate suspending versioning, we will create a new bucket $bucket_name5. First create this bucket:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api create-bucket \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name5\n</code></pre> <pre><code>aws s3api create-bucket \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name5\n</code></pre> <pre><code>aws s3api create-bucket \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name5\n</code></pre> <pre><code>aws s3api create-bucket \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name5\n</code></pre> <p>and then enable versioning on it:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api put-bucket-versioning \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name5 \\\n--versioning-configuration MFADelete=Disabled,Status=Enabled\n</code></pre> <pre><code>aws s3api put-bucket-versioning \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name5 \\\n--versioning-configuration MFADelete=Disabled,Status=Enabled\n</code></pre> <pre><code>aws s3api put-bucket-versioning 
\\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name5 \\\n--versioning-configuration MFADelete=Disabled,Status=Enabled\n</code></pre> <pre><code>aws s3api put-bucket-versioning \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name5 \\\n--versioning-configuration MFADelete=Disabled,Status=Enabled\n</code></pre> <p>Upload a few files to this bucket with put-object command. Make sure that at least one of them has multiple versions.</p> <p>In this example, our bucket contains the following files:</p> <p>Table 8 Key vs. VersionId\ud83d\udd17</p> Key VersionId file1.txt CTv9FT1Wp9pxDZdlZXx2cJ5C2juPNA6 file1.txt eaJNZLZTqtPAq9l09Nrm-CN-UAVtFHQ file2.txt HVRcuAOQ.gpqiU50mJkdAj4bAvgfCFN <p>We can list all these versions using list-object-versions:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name5\n</code></pre> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name5\n</code></pre> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name5\n</code></pre> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name5\n</code></pre> <p>The output:</p> <pre><code>{\n \"Versions\": [\n {\n \"ETag\": \"\\\"1f5f1ebe10ac3457ca87427e1772d71f\\\"\",\n \"Size\": 10,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"file1.txt\",\n \"VersionId\": \"CTv9FT1Wp9pxDZdlZXx2cJ5C2juPNA6\",\n \"IsLatest\": true,\n \"LastModified\": \"2024-09-16T11:28:47.501Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n },\n {\n \"ETag\": \"\\\"174b29d6d688c2b34f6c1bb7361a8b7e\\\"\",\n \"Size\": 10,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"file1.txt\",\n \"VersionId\": \"eaJNZLZTqtPAq9l09Nrm-CN-UAVtFHQ\",\n \"IsLatest\": 
false,\n \"LastModified\": \"2024-09-16T11:28:10.006Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n },\n {\n \"ETag\": \"\\\"174b29d6d688c2b34f6c1bb7361a8b7e\\\"\",\n \"Size\": 10,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"file2.txt\",\n \"VersionId\": \"HVRcuAOQ.gpqiU50mJkdAj4bAvgfCFN\",\n \"IsLatest\": true,\n \"LastModified\": \"2024-09-16T11:28:20.830Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n }\n ],\n \"RequestCharged\": null\n}\n</code></pre> <p>To suspend versioning on this bucket, execute put-bucket-versioning:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api put-bucket-versioning \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name5 \\\n--versioning-configuration MFADelete=Disabled,Status=Suspended\n</code></pre> <pre><code>aws s3api put-bucket-versioning \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name5 \\\n--versioning-configuration MFADelete=Disabled,Status=Suspended\n</code></pre> <pre><code>aws s3api put-bucket-versioning \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name5 \\\n--versioning-configuration MFADelete=Disabled,Status=Suspended\n</code></pre> <pre><code>aws s3api put-bucket-versioning \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name5 \\\n--versioning-configuration MFADelete=Disabled,Status=Suspended\n</code></pre> <p>The output of this command should be empty.</p> <p>We list versions of files again with list-object-versions:</p> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name5\n</code></pre> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name5\n</code></pre> <pre><code>aws s3api list-object-versions \\\n--endpoint-url 
https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name5\n</code></pre> <pre><code>aws s3api list-object-versions \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name5\n</code></pre> <p>The output shows us that the previous versions of files have not been removed:</p> <pre><code>{\n \"Versions\": [\n {\n \"ETag\": \"\\\"1f5f1ebe10ac3457ca87427e1772d71f\\\"\",\n \"Size\": 10,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"file1.txt\",\n \"VersionId\": \"CTv9FT1Wp9pxDZdlZXx2cJ5C2juPNA6\",\n \"IsLatest\": false,\n \"LastModified\": \"2024-09-16T11:28:47.501Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n },\n {\n \"ETag\": \"\\\"174b29d6d688c2b34f6c1bb7361a8b7e\\\"\",\n \"Size\": 10,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"file1.txt\",\n \"VersionId\": \"eaJNZLZTqtPAq9l09Nrm-CN-UAVtFHQ\",\n \"IsLatest\": false,\n \"LastModified\": \"2024-09-16T11:28:10.006Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n },\n {\n \"ETag\": \"\\\"174b29d6d688c2b34f6c1bb7361a8b7e\\\"\",\n \"Size\": 10,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"file2.txt\",\n \"VersionId\": \"HVRcuAOQ.gpqiU50mJkdAj4bAvgfCFN\",\n \"IsLatest\": true,\n \"LastModified\": \"2024-09-16T11:28:20.830Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n }\n ],\n \"RequestCharged\": null\n}\n</code></pre> <p>We then</p> <ul> <li>modify the contents of previously uploaded file file1.txt on our local computer and</li> <li>upload that file again, with put-object:</li> </ul> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api put-object \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name5 \\\n--body file1.txt \\\n--key file1.txt\n</code></pre> <pre><code>aws s3api put-object \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name5 \\\n--body file1.txt \\\n--key 
file1.txt\n</code></pre> <pre><code>aws s3api put-object \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name5 \\\n--body file1.txt \\\n--key file1.txt\n</code></pre> <pre><code>aws s3api put-object \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name5 \\\n--body file1.txt \\\n--key file1.txt\n</code></pre> <p>After a successful upload, we again list all versions of files in our bucket (command list-object-versions):</p> <pre><code>{\n \"Versions\": [\n {\n \"ETag\": \"\\\"4d3828bb564834c45a522e3492cbdf4a\\\"\",\n \"Size\": 10,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"file1.txt\",\n \"VersionId\": \"null\",\n \"IsLatest\": true,\n \"LastModified\": \"2024-09-16T11:31:01.968Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n },\n {\n \"ETag\": \"\\\"1f5f1ebe10ac3457ca87427e1772d71f\\\"\",\n \"Size\": 10,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"file1.txt\",\n \"VersionId\": \"CTv9FT1Wp9pxDZdlZXx2cJ5C2juPNA6\",\n \"IsLatest\": false,\n \"LastModified\": \"2024-09-16T11:28:47.501Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n },\n {\n \"ETag\": \"\\\"174b29d6d688c2b34f6c1bb7361a8b7e\\\"\",\n \"Size\": 10,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"file1.txt\",\n \"VersionId\": \"eaJNZLZTqtPAq9l09Nrm-CN-UAVtFHQ\",\n \"IsLatest\": false,\n \"LastModified\": \"2024-09-16T11:28:10.006Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n },\n {\n \"ETag\": \"\\\"174b29d6d688c2b34f6c1bb7361a8b7e\\\"\",\n \"Size\": 10,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"file2.txt\",\n \"VersionId\": \"HVRcuAOQ.gpqiU50mJkdAj4bAvgfCFN\",\n \"IsLatest\": true,\n \"LastModified\": \"2024-09-16T11:28:20.830Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n }\n ],\n \"RequestCharged\": 
null\n}\n</code></pre> <p>The previous versions of this file were not replaced, but a new version of file file1.txt (which has VersionId of null), was uploaded.</p> <p>From now on, each uploaded file will be uploaded with VersionId of null. If this version of that file already exists, it will be replaced.</p> <p>To illustrate that, we</p> <ul> <li>once again modify the file file1.txt on our local computer and</li> <li>upload this modified version again, using put-object:</li> </ul> <p>WAW4-1WAW3-1WAW3-2FRA1-2</p> <pre><code>aws s3api put-object \\\n--endpoint-url https://s3.waw4-1.3Engines.com \\\n--bucket $bucket_name5 \\\n--body file1.txt \\\n--key file1.txt\n</code></pre> <pre><code>aws s3api put-object \\\n--endpoint-url https://s3.waw3-1.3Engines.com \\\n--bucket $bucket_name5 \\\n--body file1.txt \\\n--key file1.txt\n</code></pre> <pre><code>aws s3api put-object \\\n--endpoint-url https://s3.waw3-2.3Engines.com \\\n--bucket $bucket_name5 \\\n--body file1.txt \\\n--key file1.txt\n</code></pre> <pre><code>aws s3api put-object \\\n--endpoint-url https://s3.fra1-2.3Engines.com \\\n--bucket $bucket_name5 \\\n--body file1.txt \\\n--key file1.txt\n</code></pre> <p>After this upload, we list versions one more time and get the following output:</p> <pre><code>{\n \"Versions\": [\n {\n \"ETag\": \"\\\"c96e9d7d1e4655b15493cc31ab7cfc24\\\"\",\n \"Size\": 10,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"file1.txt\",\n \"VersionId\": \"null\",\n \"IsLatest\": true,\n \"LastModified\": \"2024-09-16T11:34:25.528Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n },\n {\n \"ETag\": \"\\\"1f5f1ebe10ac3457ca87427e1772d71f\\\"\",\n \"Size\": 10,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"file1.txt\",\n \"VersionId\": \"CTv9FT1Wp9pxDZdlZXx2cJ5C2juPNA6\",\n \"IsLatest\": false,\n \"LastModified\": \"2024-09-16T11:28:47.501Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": 
\"1234567890abcdefghijklmnopqrstuv\"\n }\n },\n {\n \"ETag\": \"\\\"174b29d6d688c2b34f6c1bb7361a8b7e\\\"\",\n \"Size\": 10,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"file1.txt\",\n \"VersionId\": \"eaJNZLZTqtPAq9l09Nrm-CN-UAVtFHQ\",\n \"IsLatest\": false,\n \"LastModified\": \"2024-09-16T11:28:10.006Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n },\n {\n \"ETag\": \"\\\"174b29d6d688c2b34f6c1bb7361a8b7e\\\"\",\n \"Size\": 10,\n \"StorageClass\": \"STANDARD\",\n \"Key\": \"file2.txt\",\n \"VersionId\": \"HVRcuAOQ.gpqiU50mJkdAj4bAvgfCFN\",\n \"IsLatest\": true,\n \"LastModified\": \"2024-09-16T11:28:20.830Z\",\n \"Owner\": {\n \"DisplayName\": \"this-project\",\n \"ID\": \"1234567890abcdefghijklmnopqrstuv\"\n }\n }\n ],\n \"RequestCharged\": null\n}\n</code></pre> <p>Once again, there is only one version which has null as its ID - the upload overwrote the previous version. The date of last modification (LastModified) has changed. Its previous value was 2024-09-16T11:31:01.968Z and now it is 2024-09-16T11:34:25.528Z</p>"},{"location":"s3/S3-bucket-object-versioning-on-3Engines-Cloud.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>AWS CLI is not the only available way of interacting with object storage. 
Other ways include:</p> Horizon dashboard How to use Object Storage on 3Engines Cloud s3fs How to Mount Object Storage Container as a File System in Linux Using s3fs on 3Engines Cloud Rclone How to mount object storage container from 3Engines Cloud as file system on local Windows computer s3cmd How to access object storage from 3Engines Cloud using s3cmd"},{"location":"s3/Server-Side-Encryption-with-Customer-Managed-Keys-SSE-C-on-3Engines-Cloud.html.html","title":"Server-Side Encryption with Customer-Managed Keys (SSE-C) on 3Engines Cloud\ud83d\udd17","text":""},{"location":"s3/Server-Side-Encryption-with-Customer-Managed-Keys-SSE-C-on-3Engines-Cloud.html.html#introduction","title":"Introduction\ud83d\udd17","text":"<p>This guide explains how to encrypt your objects server-side with SSE-C.</p> <p>Server-side encryption is a way of protecting data at rest. SSE encrypts only the object data. Using server-side encryption with customer-provided encryption keys (SSE-C) allows you to set your own keys for encryption. The server manages encryption as it writes data to disks and decryption when you access your objects. The only thing you must manage is providing your own encryption keys.</p> <p>SSE-C operates at the moment an object is uploaded. The server uses the encryption key you provide to apply AES-256 encryption to the data, then removes the encryption key from memory. To access the data again, you must provide the same encryption key with the request. The server verifies that the provided key matches and then decrypts the object before returning the object data to you.</p>"},{"location":"s3/Server-Side-Encryption-with-Customer-Managed-Keys-SSE-C-on-3Engines-Cloud.html.html#requirements","title":"Requirements\ud83d\udd17","text":"<ul> <li>A bucket (How to use Object Storage on 3Engines Cloud)</li> <li> <p>A user with the required access rights on the bucket</p> </li> <li> <p>EC2 credentials (How to generate and manage EC2 credentials on 3Engines Cloud)</p> </li> <li>The aws CLI installed and configured</li> </ul> <p>If you have not used aws before:</p> <pre><code>$ sudo apt install awscli\n</code></pre> <p>Then:</p> <pre><code>$ aws configure\n\nAWS Access Key ID [None]: &lt;your EC2 Access Key&gt;\nAWS Secret Access Key [None]: &lt;your EC2 Secret Key&gt;\nDefault region name [None]: &lt;enter&gt;\nDefault output format [None]: &lt;enter&gt;\n</code></pre> <p>SSE-C at a glance</p> <ul> <li>HTTPS only. S3 rejects any request made over HTTP that uses SSE-C.</li> <li>If you erroneously send a request over HTTP, you should, for security, discard the key and rotate it as appropriate.</li> <li>The ETag in the response is not the MD5 of the object data.</li> <li>You are responsible for managing your encryption keys and for tracking which objects they were used with.</li> <li>If the bucket is versioning-enabled, each object version can have its own encryption key.</li> </ul> <p>Attention</p> <p>If you lose an encryption key, the object is lost as well. Our servers do not store encryption keys, so it is not possible to access the data again without them.</p>"},{"location":"s3/Server-Side-Encryption-with-Customer-Managed-Keys-SSE-C-on-3Engines-Cloud.html.html#rest-api","title":"REST API\ud83d\udd17","text":"<p>To encrypt or decrypt objects in SSE-C mode, the following headers are required:</p> Header Type Description x-amz-server-side\u200b-encryption\u200b-customer-algorithm string Encryption algorithm. 
Must be set to AES256 x-amz-server-side\u200b-encryption\u200b-customer-key string 256-bit base64-encoded encryption key used in the server-side encryption process x-amz-server-side\u200b-encryption\u200b-customer-key-MD5 string base64-encoded 128-bit MD5 digest of the encryption key according to RFC 1321. It is used to ensure that the encryption key has not been corrupted during transport and encoding. <p>Note</p> <p>The MD5 digest is computed over the key before base64 encoding.</p> <p>The headers apply to the following API operations:</p> <ul> <li>PutObject</li> <li>PostObject</li> <li>CopyObject (to target objects)</li> <li>HeadObject</li> <li>GetObject</li> <li>InitiateMultipartUpload</li> <li>UploadPart</li> <li>UploadPart-Copy (to target parts)</li> </ul>"},{"location":"s3/Server-Side-Encryption-with-Customer-Managed-Keys-SSE-C-on-3Engines-Cloud.html.html#example-no-1-generate-header-values","title":"Example No 1 Generate header values\ud83d\udd17","text":"<pre><code>secret=\"32bytesOfTotallyRandomCharacters\"\nkey=$(echo -n $secret | base64)\nkeymd5=$(echo -n $secret | openssl dgst -md5 -binary | base64)\n</code></pre> <p>OR</p> <pre><code>openssl rand 32 &gt; sse-c.key\nkey=$(cat sse-c.key | base64)\nkeymd5=$(cat sse-c.key | openssl dgst -md5 -binary | base64)\n</code></pre>"},{"location":"s3/Server-Side-Encryption-with-Customer-Managed-Keys-SSE-C-on-3Engines-Cloud.html.html#example-no-2-aws-cli-s3api","title":"Example No 2 aws-cli (s3api)\ud83d\udd17","text":"<p>Upload an object with SSE-C encryption enabled:</p> <pre><code>aws s3api put-object \\\n --bucket bucket-name --key object-name \\\n --body contents.txt \\\n --sse-customer-algorithm AES256 \\\n --sse-customer-key $key \\\n --sse-customer-key-md5 $keymd5 \\\n --endpoint-url https://s3.waw3-1.3Engines.com\n</code></pre>"},{"location":"s3/Server-Side-Encryption-with-Customer-Managed-Keys-SSE-C-on-3Engines-Cloud.html.html#example-no-3-aws-cli-s3","title":"Example No 3 aws-cli (s3)\ud83d\udd17","text":"<pre><code>aws s3 
cp file.txt s3://bucket-name/ \\\n --sse-c-key $secret \\\n --sse-c AES256 \\\n --endpoint https://s3.waw3-1.3Engines.com\n</code></pre>"},{"location":"s3/Server-Side-Encryption-with-Customer-Managed-Keys-SSE-C-on-3Engines-Cloud.html.html#example-no-4-aws-cli-s3-blob","title":"Example No 4 aws-cli (s3 blob)\ud83d\udd17","text":"<pre><code>aws s3 cp file.txt s3://bucket/ \\\n--sse-c-key fileb://sse-c.key \\\n--sse-c AES256 \\\n--endpoint https://s3.waw3-1.3Engines.com\n</code></pre> <p>Note</p> <p>At the moment s3cmd does not support SSE-C encryption.</p>"},{"location":"s3/Server-Side-Encryption-with-Customer-Managed-Keys-SSE-C-on-3Engines-Cloud.html.html#downloading-the-encrypted-object","title":"Downloading the encrypted object\ud83d\udd17","text":"<pre><code>aws s3api get-object &lt;file_name&gt; --bucket &lt;bucket_name&gt; \\\n --key &lt;object_key&gt; \\\n --sse-customer-key $secret \\\n --sse-customer-algorithm AES256 \\\n --endpoint https://s3.waw3-1.3Engines.com\n</code></pre> <p>or</p> <pre><code>aws s3api get-object &lt;file_name&gt; --bucket &lt;bucket_name&gt; \\\n --key &lt;object_key&gt; \\\n --sse-customer-key fileb://&lt;key_name&gt; \\\n --sse-customer-algorithm AES256 \\\n --endpoint https://s3.waw3-1.3Engines.com\n</code></pre>"},{"location":"s3/s3.html.html","title":"S3 Storage","text":""},{"location":"s3/s3.html.html#available-documentation","title":"Available Documentation","text":"<ul> <li>How to Delete Large S3 Bucket on 3Engines Cloud</li> <li>How to Mount Object Storage Container as a File System in Linux Using s3fs on 3Engines Cloud</li> <li>Bucket sharing using s3 bucket policy on 3Engines Cloud</li> <li>How to use Object Storage on 3Engines Cloud</li> <li>How to access private object storage using S3cmd or boto3 on 3Engines Cloud</li> <li>How to Install Boto3 in Windows on 3Engines Cloud</li> <li>Server-Side Encryption with Customer-Managed Keys (SSE-C) on 3Engines Cloud</li> <li>How to mount object storage container from 3Engines Cloud 
as file system on local Windows computer</li> <li>How to install s3cmd on Linux on 3Engines Cloud</li> <li>How to access object storage from 3Engines Cloud using boto3</li> <li>How to access object storage from 3Engines Cloud using s3cmd</li> <li>Configuration files for s3cmd command on 3Engines Cloud</li> <li>S3 bucket object versioning on 3Engines Cloud</li> </ul>"},{"location":"windows/Can-I-change-my-password-through-RDP-on-3Engines-Cloud.html.html","title":"Can I change my password through RDP on 3Engines Cloud?\ud83d\udd17","text":"<p>In short: No, this is not possible. You have to be logged in when you want to change your password. Security measures requiring you to change your password on first login do not work with RDP and have to be disabled at the administrative level. This article shows you how to create and configure a new account which can access the VM via RDP without having to change the password immediately.</p>"},{"location":"windows/Can-I-change-my-password-through-RDP-on-3Engines-Cloud.html.html#what-we-are-going-to-cover","title":"What We Are Going To Cover\ud83d\udd17","text":"<ul> <li>Creating an account at the administrative level</li> <li>Configuring the account for remote access</li> </ul>"},{"location":"windows/Can-I-change-my-password-through-RDP-on-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Account</p> <p>You need a 3Engines Cloud hosting account with access to the Horizon interface: https://portal.3Engines.com/.</p> <p>No. 
2 Windows VM</p> <p>You need a running Windows VM with Remote Access allowed, an Administrator account, and basic Windows knowledge.</p>"},{"location":"windows/Can-I-change-my-password-through-RDP-on-3Engines-Cloud.html.html#step-1-microsoft-management-console-mmc","title":"Step 1: Microsoft Management Console (mmc)\ud83d\udd17","text":"<p>Log in as administrator, click on the Windows icon and type \u201cmmc\u201d.</p> <p></p> <p>Confirm the question with \u201cYes\u201d, select File -&gt; Add/Remove Snap-in\u2026</p> <p></p> <p>Choose the snap-in \u201cLocal Users and Groups\u201d, click \u201cAdd &gt;\u201d, \u201cFinish\u201d, and \u201cOK\u201d in the successive windows.</p>"},{"location":"windows/Can-I-change-my-password-through-RDP-on-3Engines-Cloud.html.html#step-2-create-and-configure-a-user-account","title":"Step 2: Create and configure a user account\ud83d\udd17","text":"<p>Expand the snap-in and open the \u201cUsers\u201d folder. There are already some default accounts available. Right-click into the white area where the accounts are listed and select \u201cNew user\u2026\u201d from the menu. Provide a user name and a password; full name and description are optional. Deselect \u201cUser must change password at next logon\u201d and click \u201cCreate\u201d.</p> <p></p> <p>Right-click on the newly created account and select \u201cProperties\u201d.</p> <p></p> <p>Navigate to \u201cMember Of\u201d and click \u201cAdd\u2026\u201d</p> <p></p> <p>Click on \u201cAdvanced\u2026\u201d in the opening window, then \u201cFind Now\u201d. Select \u201cRemote Desktop Users\u201d in the search results, click \u201cOK\u201d twice.</p> <p></p> <p>If everything was done right, the selected group is now listed. 
Click \u201cApply\u201d, then \u201cOK\u201d.</p> <p></p>"},{"location":"windows/Can-I-change-my-password-through-RDP-on-3Engines-Cloud.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>You have successfully created a new user account and configured it for remote use of the VM. You can now forward the credentials and ask the user to change the password after logging in.</p>"},{"location":"windows/Connecting-to-a-Windows-VM-via-RDP-through-a-Linux-bastion-host-port-forwarding-on-3Engines-Cloud.html.html","title":"Connecting to a Windows VM via RDP through a Linux bastion host port forwarding on 3Engines Cloud\ud83d\udd17","text":"<p>If you want to increase the security of your Windows VMs while connecting to them via RDP, you might want to use the method described in this article. It involves connecting to your Windows VM not directly through RDP, but through another virtual machine running Linux, known as the \u201cbastion host\u201d. In this case, the RDP connection gets tunneled through SSH and is not directly visible to others.</p> <p>This method is especially useful if you fear that your RDP connection might be compromised or if using RDP without additional security measures is illegal. 
It also allows you to use a single floating IP address to connect to multiple Windows VMs.</p>"},{"location":"windows/Connecting-to-a-Windows-VM-via-RDP-through-a-Linux-bastion-host-port-forwarding-on-3Engines-Cloud.html.html#requirements","title":"Requirements:\ud83d\udd17","text":"<ul> <li>Linux virtual machine with SSH access - bastion host</li> <li> <p>Windows virtual machine located in the same network as the bastion host</p> </li> <li> <p>The private key downloaded from OpenStack dashboard converted from .pem to .ppk format (using \u201cPuTTYgen\u201d) - for information on how to do this please see How to access a VM from Windows PuTTY on 3Engines Cloud</p> </li> <li> <p>The password for the Administrator account has been changed via the OpenStack dashboard console</p> </li> <li>Your VMs are assigned the following security group: allow_ping_ssh_icmp_rdp</li> </ul> <p></p>"},{"location":"windows/Connecting-to-a-Windows-VM-via-RDP-through-a-Linux-bastion-host-port-forwarding-on-3Engines-Cloud.html.html#step-1-information-required-to-establish-connection-with-the-bastion-host","title":"Step 1. 
Information required to establish connection with the bastion host.\ud83d\udd17","text":"<p>Launch PuTTY and change the settings according to the instructions:</p> <p>Session tab: Provide the host (bastion) floating IP address and the SSH port (default 22).</p> <p></p> <p>Connection &gt; Data tab:\u00a0Set auto-login username as \u201ceouser\u201d.</p> <p></p> <p>Connection &gt; SSH &gt; Auth tab: Select the private key in the .ppk format.</p> <p></p> <p>Connection &gt; SSH &gt; Tunnels: Provide the source port for the localhost RDP connection and destination (in the following format: private IP address of Windows VM:RDP port - as seen on the screenshot below).</p> <p></p> <p>Click the \u201cAdd\u201d button to confirm the changes.</p> <p>Your forwarded port should now be visible in the upper tab.</p> <p></p> <p>Provide the name of the session and save your config to avoid repeating the whole process every time you would like to connect to your instance again.</p> <p></p>"},{"location":"windows/Connecting-to-a-Windows-VM-via-RDP-through-a-Linux-bastion-host-port-forwarding-on-3Engines-Cloud.html.html#step-2-open-connection-in-putty","title":"Step 2. Open connection in PuTTy\ud83d\udd17","text":"<p>Click \u201cOpen\u201d to establish the connection.</p> <p></p>"},{"location":"windows/Connecting-to-a-Windows-VM-via-RDP-through-a-Linux-bastion-host-port-forwarding-on-3Engines-Cloud.html.html#step-3-start-an-rdp-session-to-localhost-to-reach-the-destination-server","title":"Step 3. 
Start an RDP session to localhost to reach the destination server\ud83d\udd17","text":"<p>Set the localhost address:port you configured for the tunnel (in this case it is either 127.0.0.1:8888 or localhost:8888 - whichever you prefer).</p> <p>Set the username as \u201cAdministrator\u201d.</p> <p></p> <p>Click \u201cConnect\u201d and enter your VM\u2019s administrator password (the one you\u2019ve set in the OpenStack console).</p> <p></p> <p>Confirm the connection in the certificate prompt.</p> <p></p> <p>That\u2019s it, you\u2019re now successfully connected to your Windows VM!</p> <p></p>"},{"location":"windows/How-To-Create-SSH-Key-Pair-In-Windows-11-On-3Engines-Cloud.html.html","title":"How to Create SSH Key Pair in Windows 11 On 3Engines Cloud\ud83d\udd17","text":"<p>This guide will show you how to generate an SSH key pair in Windows 11 using OpenSSH. You will then be able to use that key pair to control appropriately configured virtual machines hosted on 3Engines Cloud.</p> <p>This article only covers the basics of this process and assumes that you will not change the names of the generated keys.</p>"},{"location":"windows/How-To-Create-SSH-Key-Pair-In-Windows-11-On-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<p>No. 1 Local computer running Windows 11</p> <p>We assume that you have a local computer which runs Windows 11. 
This article does not cover the Windows Server family of operating systems.</p>"},{"location":"windows/How-To-Create-SSH-Key-Pair-In-Windows-11-On-3Engines-Cloud.html.html#step-1-verify-whether-openssh-client-is-installed","title":"Step 1: Verify whether OpenSSH Client is installed\ud83d\udd17","text":"<p>Open the Command Prompt (cmd.exe).</p> <p></p> <p>Execute the following command and press Enter:</p> <pre><code>ssh\n</code></pre> <p>If SSH client is installed, the output should contain information about how to use it:</p> <p></p> <p>If, however, you got the following output:</p> <pre><code>'ssh' is not recognized as an internal or external command,\noperable program or batch file.\n</code></pre> <p>it means that SSH client is not installed on your machine.</p>"},{"location":"windows/How-To-Create-SSH-Key-Pair-In-Windows-11-On-3Engines-Cloud.html.html#step-2-install-openssh","title":"Step 2: Install OpenSSH\ud83d\udd17","text":"<p>This step is only required if you don\u2019t have SSH client installed. If you do have it, skip to Step 3.</p> <p>Minimize the Command Prompt if you still have it open.</p> <p>Open the system Settings application and enter section System -&gt; Optional features</p> <p></p> <p>In section Add an optional feature, click View features.</p> <p> </p> <p>In text field Find an available optional feature, enter openssh</p> <p></p> <p>Two features should be displayed:</p> <ul> <li>OpenSSH Client which you can use to control other devices.</li> <li>OpenSSH Server which you can install to allow other devices to control your computer. 
This option is outside the scope of this article.</li> </ul> <p>Tick the checkbox next to OpenSSH Client and click Next</p> <p></p> <p>You should now get the following window:</p> <p></p> <p>Click Add</p> <p>Wait until the process is finished:</p> <p></p> <p>It might take a while.</p> <p>Once it\u2019s over, you should see the confirmation that the component was Added</p> <p></p>"},{"location":"windows/How-To-Create-SSH-Key-Pair-In-Windows-11-On-3Engines-Cloud.html.html#step-3-use-ssh-keygen-to-generate-an-ssh-key-pair","title":"Step 3: Use ssh-keygen to generate an SSH key pair\ud83d\udd17","text":"<p>Return to the Command Prompt you previously opened. Enter the following command to generate an SSH key pair:</p> <pre><code>ssh-keygen\n</code></pre> <p></p> <p>Of course, you can fine-tune the security and other properties of this key pair during this process. However, if you\u2019re just getting started, you can simply accept the default values by pressing Enter multiple times until the program finishes its operation and you are once again prompted to enter a command.</p> <p></p> <p>Your key pair should now be generated.</p> <p>As of the writing of this article, by default this process should create:</p> <ul> <li>a directory .ssh in your home directory, and in that directory:</li> <li>file id_ed25519 for the secret key, and</li> <li>file id_ed25519.pub for the public key</li> </ul> <p>OpenSSH names these files based on the algorithm used. As of the writing of this article, the names of these files come from the Ed25519 algorithm. 
Previously, the RSA algorithm was used, and the files were by default called id_rsa and id_rsa.pub</p> <p>If in the future the default algorithm used by OpenSSH changes, the default names of keys will likely be different.</p>"},{"location":"windows/How-To-Create-SSH-Key-Pair-In-Windows-11-On-3Engines-Cloud.html.html#step-4-see-generated-key-pair","title":"Step 4: See generated key pair\ud83d\udd17","text":"<p>Open the Run window by pressing the key combination Windows+R (if you are using a macOS keyboard, then Cmd+R)</p> <p>Enter in its text field:</p> <pre><code>%USERPROFILE%\\.ssh\n</code></pre> <p></p> <p>You should get to .ssh folder which is located in your account profile folder.</p> <p>You should there see your SSH keys:</p> <p></p> <p>In our example, these are two files:</p> <ul> <li>id_ed25519 which is our private key</li> <li>id_ed25519.pub which is our public key</li> </ul> <p>Note that public SSH key and Microsoft Publisher documents share the same extension - .pub</p> <p>Because of that, Windows might mistakenly mark your public SSH key as a Microsoft Publisher document, as was the case on screenshot above.</p> <p>If you want to see the full extensions of files, including .pub, click View on the task bar of the File Explorer. After that, click Show -&gt; File name extensions</p> <p> </p>"},{"location":"windows/How-To-Create-SSH-Key-Pair-In-Windows-11-On-3Engines-Cloud.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>For Windows 10, see this guide: How to Create SSH Key Pair in Windows 10 On 3Engines Cloud</p> <p>To be able to easily add your new public key to VMs you might create in the future, upload it to OpenStack. 
Thanks to that, you will be able to use it to authenticate to VMs which support it.</p> <p>Learn more here:</p> <p>How to add SSH key from Horizon web console on 3Engines Cloud</p> <p>Once you\u2019ve done it, you can create a new virtual machine on 3Engines Cloud and authenticate with your key pair:</p> <p>How to create a Linux VM and access it from Windows desktop on 3Engines Cloud</p> <p>The following articles cover how to connect to virtual machines via SSH once they\u2019ve already been created:</p>"},{"location":"windows/How-To-Create-SSH-Key-Pair-In-Windows-On-3Engines-Cloud.html.html","title":"How to Create SSH Key Pair in Windows 10 On 3Engines Cloud\ud83d\udd17","text":"<p>This guide will show you how to generate an SSH key pair in Windows 10 using OpenSSH.</p>"},{"location":"windows/How-To-Create-SSH-Key-Pair-In-Windows-On-3Engines-Cloud.html.html#prerequisites","title":"Prerequisites\ud83d\udd17","text":"<ul> <li>System running Windows 10 or Windows Server 2016-2022</li> <li>User account with administrative privileges</li> <li>Access to Windows command prompt</li> </ul>"},{"location":"windows/How-To-Create-SSH-Key-Pair-In-Windows-On-3Engines-Cloud.html.html#step-1-verify-if-openssh-client-is-installed","title":"Step 1: Verify if OpenSSH Client is Installed\ud83d\udd17","text":"<p>First, check to see if you have the OpenSSH client installed:</p> <ol> <li>Open the Settings panel, then click Apps.</li> <li>Under the Apps and Features heading, click Manage optional Features.</li> </ol> <p></p> <ol> <li> <p>Scroll down the list to see if OpenSSH Client is listed.</p> </li> <li> <p>If it\u2019s not, click the plus-sign next to Add a feature.</p> </li> <li>Scroll through the list to find and select OpenSSH Client.</li> <li>Finally, click Install.</li> </ol> <p></p> <p>This will install an app called ssh-keygen.</p>"},{"location":"windows/How-To-Create-SSH-Key-Pair-In-Windows-On-3Engines-Cloud.html.html#step-2-open-command-prompt","title":"Step 2: Open 
Command Prompt\ud83d\udd17","text":"<p>ssh-keygen runs from the Windows Command Prompt, so the next step is to open it.</p> <ol> <li>Press the Windows key.</li> <li>Type cmd.</li> <li>Under Best Match, right-click Command Prompt.</li> <li>Click Run as Administrator.</li> </ol> <p></p>"},{"location":"windows/How-To-Create-SSH-Key-Pair-In-Windows-On-3Engines-Cloud.html.html#step-3-use-openssh-to-generate-an-ssh-key-pair","title":"Step 3: Use OpenSSH to Generate an SSH Key Pair\ud83d\udd17","text":"<p>Finally, run ssh-keygen to generate the public and private keys for SSH access to the 3Engines Cloud server.</p> <ol> <li>In the Command Prompt, type the following:</li> </ol> <pre><code>ssh-keygen\n</code></pre> <p></p> <p>Press ENTER three times. This will</p> <ul> <li>create the .ssh folder for the keys,</li> <li>create the file id_rsa for the private key, and</li> <li>create the file id_rsa.pub for the public key.</li> </ul> <p>These are the default values.</p> <p>Warning</p> <p>If you have already created other keys in those locations, you can specify a different folder and file names instead of just pressing Enter three times.</p> <p></p> <p>To see the generated files, navigate to C:/Users//.ssh with your file explorer.</p> <p></p> <p>The image shows the default files for the private and public keys, id_rsa and id_rsa.pub, respectively.</p>"},{"location":"windows/How-To-Create-SSH-Key-Pair-In-Windows-On-3Engines-Cloud.html.html#what-to-do-next","title":"What To Do Next\ud83d\udd17","text":"<p>For Windows 11, see this guide: How to Create SSH Key Pair in Windows 11 On 3Engines Cloud</p> <p>Put your public key on the remote server and use your private key to authenticate to your VM. 
To add the public key to the remote server, see</p> <p>How to add SSH key from Horizon web console on 3Engines Cloud</p> <p>To connect to the server from Windows:</p> <p>How to connect to a virtual machine via SSH from Windows 10 Command Prompt on 3Engines Cloud</p> <p>How to access a VM from Windows PuTTY on 3Engines Cloud</p>"},{"location":"windows/How-to-access-a-VM-from-Windows-PuTTY-on-3Engines-Cloud.html.html","title":"How to access a VM from Windows PuTTY on 3Engines Cloud\ud83d\udd17","text":"<p>The link below shows how to generate and add RSA key pairs:</p> <p>How to connect to a virtual machine via SSH from Windows 10 Command Prompt on 3Engines Cloud</p> <p>In this tutorial, key.pem is equivalent to the id_rsa file that we obtain in a zip package after the key generation process.</p> <p>To connect via PuTTY, copy your virtual machine\u2019s floating IP address and save it somewhere.</p> <p></p> <p>Open PuTTYgen to convert the private key file to the .ppk format (the format used by the PuTTY client). Click the \u201cLoad\u201d button.</p> <p></p> <p>Choose the key file. Make sure that you have set the visibility to \u201cAll files\u201d.</p> <p></p> <p>A prompt window informing you about the successful import will appear.</p> <p></p> <p>Save your imported private key in the .ppk format.</p> <p></p> <p></p> <p>Open the PuTTY Configuration tool and focus on the marked labels:</p> <p></p> <p>Description:</p> <ol> <li>Host Name (or IP address) \u2192 Enter the floating IP address, which you can find in the Horizon panel</li> <li>Port \u2192 Assign the SSH service port; by default it is 22</li> <li>Connection type \u2192 Select SSH</li> </ol> <p>The configuration is now set up. 
Expand the SSH branch.</p> <p></p> <p>Expand the Auth branch and provide the private key file by clicking \u201cBrowse\u201d, selecting your key, and clicking \u201cOpen\u201d.</p> <p></p> <p></p> <p>(Optional) Expand the \u201cConnection\u201d list and click \u201cData\u201d.</p> <p>Set Auto-login username: eouser.</p> <p></p> <p>For convenience, you can save the session for future use by naming it and saving the changes.</p> <p></p> <p>Choose the saved session and click the \u201cOpen\u201d button to start the SSH session:</p> <p></p> <p>If you are connecting to your VM via PuTTY for the first time, we recommend that you save the host key fingerprint by choosing Yes for future connections from your computer.</p> <p></p> <p>If you have logged in correctly, you should see the following at the bottom of the screen:</p> <pre><code>eouser@yourInstanceName:~$\n</code></pre> <p>You are now logged in to your VM via SSH from another host.</p> <p></p> <p>If you would like to learn more about PuTTYgen, its installation and usage, visit the website https://www.puttygen.com.</p>"},{"location":"windows/How-to-connect-to-a-virtual-machine-via-SSH-from-Windows-10-Command-Prompt-on-3Engines-Cloud.html.html","title":"How to connect to a virtual machine via SSH from Windows 10 Command Prompt on 3Engines Cloud\ud83d\udd17","text":""},{"location":"windows/How-to-connect-to-a-virtual-machine-via-SSH-from-Windows-10-Command-Prompt-on-3Engines-Cloud.html.html#requirements","title":"Requirements\ud83d\udd17","text":"<p>The private and public keys were created and saved on the local disk of your computer. (How to create key pair in OpenStack Dashboard on 3Engines Cloud)</p> <p>During the virtual machine creation procedure, the generated key was attached. (How to create new Linux VM in OpenStack Dashboard Horizon on 3Engines Cloud)</p> <p>A floating IP was assigned to your VM. 
(How to Add or Remove Floating IP\u2019s to your VM on 3Engines Cloud)</p> <p>Check in \u201cInstalled features\u201d whether the OpenSSH client is installed; if not, click Add a feature, search for OpenSSH client, and install it.</p> <p></p>"},{"location":"windows/How-to-connect-to-a-virtual-machine-via-SSH-from-Windows-10-Command-Prompt-on-3Engines-Cloud.html.html#step-1-go-to-the-folder-containing-your-ssh-keys","title":"Step 1 Go to the folder containing your SSH keys\ud83d\udd17","text":"<p>Run the Command Prompt and change the current folder to the folder where you store your keys.</p> <p>For example:</p> <pre><code>cd c:\\Users\\wikit\\sshkeys\n</code></pre>"},{"location":"windows/How-to-connect-to-a-virtual-machine-via-SSH-from-Windows-10-Command-Prompt-on-3Engines-Cloud.html.html#step-2-connect-to-your-vm-using-ssh","title":"Step 2 Connect to your VM using SSH\ud83d\udd17","text":"<p>If the name of your key is id_rsa and the floating IP of your virtual machine is 64.225.129.203, type the following command:</p> <pre><code>ssh -i id_rsa eouser@64.225.129.203\n</code></pre> <p>If the text before the cursor changed to eouser@test (assuming the name of your virtual machine is test), the connection was successfully established. 
Before that, you may get the message that the authenticity of the host can\u2019t be established and the following question:</p> <pre><code>Are you sure you want to continue connecting (yes/no/[fingerprint])?\n</code></pre> <p>If you got that message, it typically means that your computer has never connected to your VM via SSH before and you should confirm that you are willing to connect by typing \u201cyes\u201d and pressing Enter.</p> <p>You should now be able to issue commands to your VM:</p> <p></p>"},{"location":"windows/windows.html.html","title":"Windows Management","text":""},{"location":"windows/windows.html.html#available-documentation","title":"Available Documentation","text":"<ul> <li>How to access a VM from Windows PuTTY on 3Engines Cloud</li> <li>Connecting to a Windows VM via RDP through a Linux bastion host port forwarding on 3Engines Cloud</li> <li>How to connect to a virtual machine via SSH from Windows 10 Command Prompt on 3Engines Cloud</li> <li>How to Create SSH Key Pair in Windows 10 On 3Engines Cloud</li> <li>Can I change my password through RDP on 3Engines Cloud?</li> <li>How to Create SSH Key Pair in Windows 11 On 3Engines Cloud</li> </ul>"}]}