link icon replaced

govardhan
2025-06-19 14:09:10 +05:30
parent 60adbde60c
commit 172f8e2b34
158 changed files with 996 additions and 996 deletions

@ -1,12 +1,12 @@
Bucket sharing using s3 bucket policy on CloudFerro Cloud[](#bucket-sharing-using-s3-bucket-policy-on-brand-name "Permalink to this headline")
Bucket sharing using s3 bucket policy on CloudFerro Cloud[🔗](#bucket-sharing-using-s3-bucket-policy-on-brand-name "Permalink to this headline")
===============================================================================================================================================
S3 bucket policy[](#s3-bucket-policy "Permalink to this headline")
S3 bucket policy[🔗](#s3-bucket-policy "Permalink to this headline")
-------------------------------------------------------------------
**Ceph** is the Software Defined Storage used in the CloudFerro Cloud cloud, providing object storage compatible with a subset of the Amazon S3 API. A bucket policy in Ceph is part of the S3 API and allows selective sharing of access to object storage buckets between users of different projects in the same cloud.
Naming conventions used in this document[](#naming-conventions-used-in-this-document "Permalink to this headline")
Naming conventions used in this document[🔗](#naming-conventions-used-in-this-document "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------
Bucket Owner
@ -26,7 +26,7 @@ Tenant Admin
In code examples, values typed in all-capital letters, such as BUCKET\_OWNER\_PROJECT\_ID, are placeholders which should be replaced with actual values matching your use-case.
Limitations[](#limitations "Permalink to this headline")
Limitations[🔗](#limitations "Permalink to this headline")
---------------------------------------------------------
It is possible to grant access at the project level only, not at the user level. In order to grant access to an individual user,
@ -37,12 +37,12 @@ Ceph S3 implementation
> * supports the following S3 actions by setting bucket policy but
> * does not support user, role or group policies.
S3cmd CONFIGURATION[](#s3cmd-configuration "Permalink to this headline")
S3cmd CONFIGURATION[🔗](#s3cmd-configuration "Permalink to this headline")
-------------------------------------------------------------------------
To share a bucket using an S3 bucket policy, you first have to configure s3cmd using this tutorial: [How to access private object storage using S3cmd or boto3 on CloudFerro Cloud](How-to-access-private-object-storage-using-S3cmd-or-boto3-on-CloudFerro-Cloud.html.md)
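For orientation, the outcome of that tutorial is a `~/.s3cfg` file roughly like the sketch below; the endpoint host and keys are placeholders, not actual CloudFerro Cloud values:

```ini
[default]
access_key = REPLACE_WITH_ACCESS_KEY
secret_key = REPLACE_WITH_SECRET_KEY
# Placeholder endpoint -- use the object storage endpoint of your cloud
host_base = s3.example-endpoint.cloud
host_bucket = s3.example-endpoint.cloud
use_https = True
```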
Declaring bucket policy[](#declaring-bucket-policy "Permalink to this headline")
Declaring bucket policy[🔗](#declaring-bucket-policy "Permalink to this headline")
---------------------------------------------------------------------------------
Important
@ -54,7 +54,7 @@ The code in this article will work only if the value of **Version** parameter is
```
### Policy JSON files sections[](#policy-json-file-s-sections "Permalink to this headline")
### Policy JSON files sections[🔗](#policy-json-file-s-sections "Permalink to this headline")
A bucket policy is declared using a JSON file. It can be created using editors such as **vim** or **nano**. Here is an example policy JSON template:
@ -98,7 +98,7 @@ ACTION
PROJECT\_ID
: Project ID
### List of actions[](#list-of-actions "Permalink to this headline")
### List of actions[🔗](#list-of-actions "Permalink to this headline")
```
s3:AbortMultipartUpload
@ -140,7 +140,7 @@ s3:PutObjectVersionAcl
```
### KEY\_SPECIFICATION[](#key-specification "Permalink to this headline")
### KEY\_SPECIFICATION[🔗](#key-specification "Permalink to this headline")
It defines a bucket and its keys/objects. For example:
@ -151,7 +151,7 @@ It defines a bucket and its keys/objects. For example:
```
### Conditions[](#conditions "Permalink to this headline")
### Conditions[🔗](#conditions "Permalink to this headline")
Additional conditions to filter access to the bucket. For example, you can grant access to a specific IP address using:
@ -175,7 +175,7 @@ or, alternatively, you can permit access to a specific IP using:
```
SETTING A POLICY ON THE BUCKET[](#setting-a-policy-on-the-bucket "Permalink to this headline")
SETTING A POLICY ON THE BUCKET[🔗](#setting-a-policy-on-the-bucket "Permalink to this headline")
-----------------------------------------------------------------------------------------------
The policy may be set on a bucket using:
@ -199,10 +199,10 @@ s3cmd delpolicy s3://MY_SHARED_BUCKET
```
Sample scenarios[](#sample-scenarios "Permalink to this headline")
Sample scenarios[🔗](#sample-scenarios "Permalink to this headline")
-------------------------------------------------------------------
### 1 Grant read/write access to a Bucket User using his **PROJECT\_ID**[](#grant-read-write-access-to-a-bucket-user-using-his-project-id "Permalink to this headline")
### 1 Grant read/write access to a Bucket User using his **PROJECT\_ID**[🔗](#grant-read-write-access-to-a-bucket-user-using-his-project-id "Permalink to this headline")
A Bucket Owner wants to grant read/write access to a bucket to a Bucket User, using their **PROJECT\_ID**:
@ -249,7 +249,7 @@ s3cmd ls s3://MY_SHARED_BUCKET
```
### 2 Limit read/write access to a Bucket to users accessing from specific IP address range[](#limit-read-write-access-to-a-bucket-to-users-accessing-from-specific-ip-address-range "Permalink to this headline")
### 2 Limit read/write access to a Bucket to users accessing from specific IP address range[🔗](#limit-read-write-access-to-a-bucket-to-users-accessing-from-specific-ip-address-range "Permalink to this headline")
A Bucket Owner wants to grant read/write access to Bucket Users who access the bucket from specific IP ranges.
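To make the scenarios above concrete, here is a minimal standard-library sketch that builds such a policy document. The project ID, the chosen action list, and the Principal ARN format are illustrative assumptions (not taken from this article), and `"2012-10-17"` is the standard S3 policy version string; `MY_SHARED_BUCKET` is the placeholder the article itself uses.

```python
import json

# Hypothetical placeholder values -- replace with your actual ones.
BUCKET_USER_PROJECT_ID = "REPLACE_WITH_PROJECT_ID"
BUCKET_NAME = "MY_SHARED_BUCKET"

# One statement granting read/write actions on the bucket and its keys
# to another project (scenario 1 above).
policy = {
    "Version": "2012-10-17",  # standard S3 policy version string (assumption)
    "Statement": [
        {
            "Sid": "GrantReadWriteToProject",
            "Effect": "Allow",
            # Principal ARN format is an assumption for illustration.
            "Principal": {"AWS": [f"arn:aws:iam::{BUCKET_USER_PROJECT_ID}:root"]},
            "Action": [
                "s3:ListBucket",
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject",
            ],
            "Resource": [
                f"arn:aws:s3:::{BUCKET_NAME}",    # the bucket itself
                f"arn:aws:s3:::{BUCKET_NAME}/*",  # every key in the bucket
            ],
            # For scenario 2, a Condition block could restrict by IP, e.g.:
            # "Condition": {"IpAddress": {"aws:SourceIp": "192.168.1.0/24"}},
        }
    ],
}

policy_json = json.dumps(policy, indent=2)
# Save policy_json to a file (e.g. policy.json) and apply it with:
#   s3cmd setpolicy policy.json s3://MY_SHARED_BUCKET
```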

@ -1,4 +1,4 @@
Configuration files for s3cmd command on CloudFerro Cloud[](#configuration-files-for-s3cmd-command-on-brand-name "Permalink to this headline")
Configuration files for s3cmd command on CloudFerro Cloud[🔗](#configuration-files-for-s3cmd-command-on-brand-name "Permalink to this headline")
===============================================================================================================================================
[s3cmd](https://github.com/s3tools/s3cmd) can access remote data using the S3 protocol. This includes **EODATA** repository and object storage on the CloudFerro Cloud cloud.

@ -1,9 +1,9 @@
How to Install Boto3 in Windows on CloudFerro Cloud[](#how-to-install-boto3-in-windows-on-brand-name "Permalink to this headline")
How to Install Boto3 in Windows on CloudFerro Cloud[🔗](#how-to-install-boto3-in-windows-on-brand-name "Permalink to this headline")
===================================================================================================================================
The **boto3** library for Python serves for listing and downloading items from a specified bucket or repository. In this article, you will install it on a Windows system.
Step 1 Ensure That Python3 is Preinstalled[](#step-1-ensure-that-python3-is-preinstalled "Permalink to this headline")
Step 1 Ensure That Python3 is Preinstalled[🔗](#step-1-ensure-that-python3-is-preinstalled "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------
**On a Desktop Windows System**
@ -18,7 +18,7 @@ Virtual machines created in the CloudFerro Cloud cloud will have Python3 already
2. Use or create a new instance in the cloud. See article: [Connecting to a Windows VM via RDP through a Linux bastion host port forwarding on CloudFerro Cloud](../windows/Connecting-to-a-Windows-VM-via-RDP-through-a-Linux-bastion-host-port-forwarding-on-CloudFerro-Cloud.html.md).
Step 2 Install boto3 on Windows[](#step-2-install-boto3-on-windows "Permalink to this headline")
Step 2 Install boto3 on Windows[🔗](#step-2-install-boto3-on-windows "Permalink to this headline")
-------------------------------------------------------------------------------------------------
In order to install boto3 on Windows:

@ -1,4 +1,4 @@
How to access object storage from CloudFerro Cloud using boto3[](#how-to-access-object-storage-from-brand-name-using-boto3 "Permalink to this headline")
How to access object storage from CloudFerro Cloud using boto3[🔗](#how-to-access-object-storage-from-brand-name-using-boto3 "Permalink to this headline")
=========================================================================================================================================================
In this article, you will learn how to access object storage from CloudFerro Cloud using Python library **boto3**.
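A minimal sketch in the spirit of that article: listing the keys in a bucket through a private S3 endpoint. The endpoint URL is a placeholder, the `client` parameter is a hypothetical convenience added here for testability, and the boto3 import is deferred so the sketch can be read without boto3 installed.

```python
def list_bucket_keys(access_key, secret_key, endpoint_url, bucket, client=None):
    """Return the object keys in *bucket* via a private S3 endpoint.

    endpoint_url is a placeholder -- use the object storage endpoint
    of the cloud you are working on.
    """
    if client is None:
        import boto3  # deferred so the sketch runs without boto3 present

        client = boto3.client(
            "s3",
            aws_access_key_id=access_key,
            aws_secret_access_key=secret_key,
            endpoint_url=endpoint_url,
        )
    response = client.list_objects_v2(Bucket=bucket)
    # "Contents" is absent when the bucket is empty, hence .get().
    return [obj["Key"] for obj in response.get("Contents", [])]

# Example call (requires real credentials and endpoint):
# keys = list_bucket_keys("ACCESS", "SECRET", "https://s3.example.cloud", "my-bucket")
```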

@ -1,9 +1,9 @@
How to access object storage from CloudFerro Cloud using s3cmd[](#how-to-access-object-storage-from-brand-name-using-s3cmd "Permalink to this headline")
How to access object storage from CloudFerro Cloud using s3cmd[🔗](#how-to-access-object-storage-from-brand-name-using-s3cmd "Permalink to this headline")
=========================================================================================================================================================
In this article, you will learn how to access object storage from CloudFerro Cloud on Linux using [s3cmd](https://github.com/s3tools/s3cmd), without mounting it as a file system. This can be done on a virtual machine on CloudFerro Cloud cloud or on a local Linux computer.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * Object storage vs. standard file system
@ -20,7 +20,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Checking how much storage is being used on a container
> * Removing the entire container
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Account**

@ -1,4 +1,4 @@
How to access private object storage using S3cmd or boto3 on CloudFerro Cloud[](#how-to-access-private-object-storage-using-s3cmd-or-boto3-on-brand-name "Permalink to this headline")
How to access private object storage using S3cmd or boto3 on CloudFerro Cloud[🔗](#how-to-access-private-object-storage-using-s3cmd-or-boto3-on-brand-name "Permalink to this headline")
=======================================================================================================================================================================================
LEGACY ARTICLE

@ -1,4 +1,4 @@
How to Delete Large S3 Bucket on CloudFerro Cloud[](#how-to-delete-large-s3-bucket-on-brand-name "Permalink to this headline")
How to Delete Large S3 Bucket on CloudFerro Cloud[🔗](#how-to-delete-large-s3-bucket-on-brand-name "Permalink to this headline")
===============================================================================================================================
**Introduction**

@ -1,4 +1,4 @@
How to install s3cmd on Linux on CloudFerro Cloud[](#how-to-install-s3cmd-on-linux-on-brand-name "Permalink to this headline")
How to install s3cmd on Linux on CloudFerro Cloud[🔗](#how-to-install-s3cmd-on-linux-on-brand-name "Permalink to this headline")
===============================================================================================================================
In this article you will learn how to install [s3cmd](https://github.com/s3tools/s3cmd) on Linux. **s3cmd** can be used, among other things, to:
@ -8,13 +8,13 @@ In this article you will learn how to install [s3cmd](https://github.com/s3tools
without mounting these resources as a file system.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * Installing **s3cmd** using **apt**
> * Uninstalling **s3cmd**
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Account**

@ -1,9 +1,9 @@
How to Mount Object Storage Container as a File System in Linux Using s3fs on CloudFerro Cloud[](#how-to-mount-object-storage-container-as-a-file-system-in-linux-using-s3fs-on-brand-name "Permalink to this headline")
How to Mount Object Storage Container as a File System in Linux Using s3fs on CloudFerro Cloud[🔗](#how-to-mount-object-storage-container-as-a-file-system-in-linux-using-s3fs-on-brand-name "Permalink to this headline")
=========================================================================================================================================================================================================================
The following article covers mounting of object storage containers using **s3fs** on Linux. One of the possible use cases is having easy access to the content of such containers on different computers and virtual machines. For access, you can use your local Linux computer or virtual machines running on the CloudFerro Cloud cloud. All users of the operating system should have read, write and execute privileges on the contents of these containers.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * Installing **s3fs**
@ -16,7 +16,7 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Stopping automatic mounting of a container
> * Potential problems with the way s3fs handles objects
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Account**
@ -47,12 +47,12 @@ No. 5 **Knowledge of the Linux command line**
Basic knowledge of the Linux command line is required.
Step 1: Sign in to your Linux machine[](#step-1-sign-in-to-your-linux-machine "Permalink to this headline")
Step 1: Sign in to your Linux machine[🔗](#step-1-sign-in-to-your-linux-machine "Permalink to this headline")
------------------------------------------------------------------------------------------------------------
Sign in to an Ubuntu account which has **sudo** privileges. If you are using SSH to connect to a virtual machine running on CloudFerro Cloud cloud, the username will likely be **eouser**.
Step 2: Install s3fs[](#step-2-install-s3fs "Permalink to this headline")
Step 2: Install s3fs[🔗](#step-2-install-s3fs "Permalink to this headline")
--------------------------------------------------------------------------
First, check if **s3fs** is installed on your machine. Enter the following command in the terminal:
@ -76,7 +76,7 @@ sudo apt update && sudo apt upgrade && sudo apt install s3fs
```
Step 3: Create file or files containing login credentials[](#step-3-create-file-or-files-containing-login-credentials "Permalink to this headline")
Step 3: Create file or files containing login credentials[🔗](#step-3-create-file-or-files-containing-login-credentials "Permalink to this headline")
----------------------------------------------------------------------------------------------------------------------------------------------------
In this article, we are going to use plain text files for storing S3 credentials - access and secret keys. (If you don't have the credentials yet, follow Prerequisite No. 4.) Each file can store one such pair and can be used to mount all object storage containers to which that key pair provides access.
@ -101,7 +101,7 @@ chmod 600 ~/.passwd-s3fs
```
Step 4: Create mount points[](#step-4-create-mount-points "Permalink to this headline")
Step 4: Create mount points[🔗](#step-4-create-mount-points "Permalink to this headline")
----------------------------------------------------------------------------------------
The files inside your object storage container should appear inside a folder of your choice. Such a folder will be called *mount point* in this article. You can use an empty folder from your file system for that purpose. You can also create new folder(s) to use as mount points.
@ -113,7 +113,7 @@ sudo mkdir /mnt/mount-point
```
Step 5: Mount a container[](#step-5-mount-a-container "Permalink to this headline")
Step 5: Mount a container[🔗](#step-5-mount-a-container "Permalink to this headline")
------------------------------------------------------------------------------------
Here is a typical command to mount a container:
@ -169,7 +169,7 @@ This is what executing the **ls** command from the mount point could produce:
To mount multiple containers, repeat the **s3fs** command with relevant parameters, as many times as needed.
Unmounting a container[](#unmounting-a-container "Permalink to this headline")
Unmounting a container[🔗](#unmounting-a-container "Permalink to this headline")
-------------------------------------------------------------------------------
To unmount a container, first make sure that the content of your object storage is not in use by any application on your system, including terminals and file managers. After that, execute the following command, replacing **/mnt/mount-point** with the mount point of your object storage container:
@ -179,7 +179,7 @@ sudo umount -lf /mnt/mount-point
```
Configuring automatic mounting of your object storage[](#configuring-automatic-mounting-of-your-object-storage "Permalink to this headline")
Configuring automatic mounting of your object storage[🔗](#configuring-automatic-mounting-of-your-object-storage "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------------
Here is how to configure automatic mounting of your object storage containers after system startup.
@ -225,7 +225,7 @@ Append such a line for every container you wish to have automatically mounted.
Reboot your VM and check whether the mounting was successful by navigating to each mount point and making sure that the files from those object storage containers are there.
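That post-reboot check can also be scripted; below is a small standard-library sketch (the example mount point path is just the placeholder used earlier in this article):

```python
import os

def is_mounted(path):
    """Return True if *path* is currently a mount point (e.g. an s3fs mount)."""
    return os.path.ismount(path)

# Example: check every mount point you listed in /etc/fstab.
# for mp in ["/mnt/mount-point"]:
#     print(mp, "mounted" if is_mounted(mp) else "NOT mounted")
```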
Stopping automatic mounting of a container[](#stopping-automatic-mounting-of-a-container "Permalink to this headline")
Stopping automatic mounting of a container[🔗](#stopping-automatic-mounting-of-a-container "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------
If you no longer want your containers to be automatically mounted, first make sure that none of them is in use by any application on your system, including terminals and file managers.
@ -252,7 +252,7 @@ Save the file and exit the text editor.
You can now reboot your virtual machine to check if the containers are indeed no longer being mounted.
Potential problems with the way s3fs handles objects[](#potential-problems-with-the-way-s3fs-handles-objects "Permalink to this headline")
Potential problems with the way s3fs handles objects[🔗](#potential-problems-with-the-way-s3fs-handles-objects "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------
**s3fs** attempts to translate object storage to a file system and most of the time is successful. However, sometimes it might not be possible. One of the potential problems with **s3fs** comes from the fact that object storage allows a folder and a file to share the same name in the same location, which is outright impossible in normal operating systems.
@ -271,7 +271,7 @@ To prevent this issue, invent and use consistent file system conventions while u
Another potential problem is that some changes to the object storage might not be immediately visible in the file system created by **s3fs**. Wait a bit and double-check to see whether that is the case.
What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------
You can also access object storage from CloudFerro Cloud without mounting it as a file system.

@ -1,4 +1,4 @@
How to mount object storage container from CloudFerro Cloud as file system on local Windows computer[](#how-to-mount-object-storage-container-from-brand-name-as-file-system-on-local-windows-computer "Permalink to this headline")
How to mount object storage container from CloudFerro Cloud as file system on local Windows computer[🔗](#how-to-mount-object-storage-container-from-brand-name-as-file-system-on-local-windows-computer "Permalink to this headline")
=====================================================================================================================================================================================================================================
This article describes how to configure direct access to object storage containers from CloudFerro Cloud cloud in **This PC** window on your local Windows computer. Such containers will be mounted as network drives, for example:
@ -7,7 +7,7 @@ This article describes how to configure direct access to object storage containe
You will configure mounting using an account which has administrative privileges obtained using UAC (User Account Control). After this process, the container should also be available on accounts which do not have such administrative privileges.
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Account**

@ -1,9 +1,9 @@
How to use Object Storage on CloudFerro Cloud[](#how-to-use-object-storage-on-brand-name "Permalink to this headline")
How to use Object Storage on CloudFerro Cloud[🔗](#how-to-use-object-storage-on-brand-name "Permalink to this headline")
=======================================================================================================================
Object storage on CloudFerro Cloud cloud can be used to store your files in *containers*. In this article, you will create a basic container and perform basic operations on it, using a web browser.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * Create a new object storage container
@ -15,14 +15,14 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> * Enabling or disabling public access to object storage containers
> * Using a public link
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Account**
You need a CloudFerro Cloud hosting account with access to the Horizon interface: <https://horizon.cloudferro.com>.
Creating a new object storage container[](#creating-a-new-object-storage-container "Permalink to this headline")
Creating a new object storage container[🔗](#creating-a-new-object-storage-container "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------
Log in to the Horizon dashboard. Navigate to the following section: **Object Store > Containers**.
@ -81,7 +81,7 @@ You may encounter the following error:
The reason for it might be that you are trying to create an object storage container which has the same name as another container. Try using a different name.
Viewing the container[](#viewing-the-container "Permalink to this headline")
Viewing the container[🔗](#viewing-the-container "Permalink to this headline")
-----------------------------------------------------------------------------
To view the content of the container, click its name on the list:
@ -90,7 +90,7 @@ To view the content of the container, click its name on the list:
You should see the files stored in the container; initially, there will be none. You can now create folders and upload files to this container.
Creating a new folder[](#creating-a-new-folder "Permalink to this headline")
Creating a new folder[🔗](#creating-a-new-folder "Permalink to this headline")
-----------------------------------------------------------------------------
To create a new folder, click button: ![new-folder](_images/use-object-storage-new-folder_creodias.png). You should get the following form:
@ -108,7 +108,7 @@ Adding forward slash in the beginning of such directory structure is optional. T
Click **Create Folder** to confirm.
Navigating through folders[](#navigating-through-folders "Permalink to this headline")
Navigating through folders[🔗](#navigating-through-folders "Permalink to this headline")
---------------------------------------------------------------------------------------
To navigate to another folder on your object storage container, click its name. Folder names will be written in blue and in the **Size** column, the word **Folder** will be shown.
@ -121,7 +121,7 @@ That would be directory **another-folder**, inside the **second-folder** directo
Click the name of the folder you want to go to.
Uploading a file[](#uploading-a-file "Permalink to this headline")
Uploading a file[🔗](#uploading-a-file "Permalink to this headline")
-------------------------------------------------------------------
To upload a file to your object storage container, click the ![upload-file](_images/use-object-storage-upload_creodias.png) button. You should get the following window:
@ -157,10 +157,10 @@ Warning
Having two files or two folders of the same name in the same directory is impossible. Having a file and folder under the same name in the same directory (extension is considered part of the name here) may lead to problems so it is best to avoid it.
Deleting files and folders from a container[](#deleting-files-and-folders-from-a-container "Permalink to this headline")
Deleting files and folders from a container[🔗](#deleting-files-and-folders-from-a-container "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------
### Deleting one file[](#deleting-one-file "Permalink to this headline")
### Deleting one file[🔗](#deleting-one-file "Permalink to this headline")
To delete a file from a container, open the drop-down menu next to the **Download** button.
@ -174,11 +174,11 @@ You should get the following request for confirmation:
Click **Delete** to confirm. Your file should be deleted.
### Deleting one folder[](#deleting-one-folder "Permalink to this headline")
### Deleting one folder[🔗](#deleting-one-folder "Permalink to this headline")
If you want to delete a folder and its contents, click the ![delete-folder](_images/use-object-storage-delete-folder_creodias.png) button next to it. You should get a similar request for confirmation as previously. Like before, click **Delete** to confirm.
### Deleting multiple files and/or folders[](#deleting-multiple-files-and-or-folders "Permalink to this headline")
### Deleting multiple files and/or folders[🔗](#deleting-multiple-files-and-or-folders "Permalink to this headline")
If you want to delete multiple files and/or folders at the same time, use checkboxes on the left of the list to select the ones you want to remove, for example:
@ -190,15 +190,15 @@ You can also select all files and folders on a page by clicking the checkbox abo
To delete selected items, click the ![delete-selected](_images/use-object-storage-delete-delete-selected_creodias.png) button to the right of the button used to create new folders. In this case, you should also get a similar request for confirmation. Click **Delete** to confirm.
Recommended number of files in your object storage containers[](#recommended-number-of-files-in-your-object-storage-containers "Permalink to this headline")
Recommended number of files in your object storage containers[🔗](#recommended-number-of-files-in-your-object-storage-containers "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------------------
It is recommended that you do not have more than 1 000 000 (one million) files and folders in one object storage container since it will make listing them inefficient. If you want to store a large number of files, use multiple object storage containers for that purpose.
Working with public object storage containers[](#working-with-public-object-storage-containers "Permalink to this headline")
Working with public object storage containers[🔗](#working-with-public-object-storage-containers "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------
### Enabling or disabling public access to object storage containers[](#enabling-or-disabling-public-access-to-object-storage-containers "Permalink to this headline")
### Enabling or disabling public access to object storage containers[🔗](#enabling-or-disabling-public-access-to-object-storage-containers "Permalink to this headline")
During the creation of your object storage container you had an option to set whether it should be accessible by the public or not. If you wish to change that setting later, first find the name of the container you wish to modify in the container list.
@ -210,7 +210,7 @@ Check or uncheck the **Public Access** checkbox depending on whether you wish to
If you enabled **Public Access**, a link to your object storage container will be provided.
### Using a public link[](#using-a-public-link "Permalink to this headline")
### Using a public link[🔗](#using-a-public-link "Permalink to this headline")
Once you have created a public link, enter it into the browser. You should see a list of all files and folders in your container, for example:
@ -238,7 +238,7 @@ Warning
If you share a link to one file from an object storage container, the recipient will be able to create download links for all other files on that object storage container. Obviously, this could be a security risk.
What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------
Now that you have created your object storage container you can mount it on the platform of your choice for easier access. There are many ways to do that, for instance:

@ -1,4 +1,4 @@
S3 bucket object versioning on CloudFerro Cloud[](#s3-bucket-object-versioning-on-brand-name "Permalink to this headline")
S3 bucket object versioning on CloudFerro Cloud[🔗](#s3-bucket-object-versioning-on-brand-name "Permalink to this headline")
===========================================================================================================================
S3 bucket versioning allows you to keep different versions of the file stored on object storage. Here are some typical use cases:
@ -16,7 +16,7 @@ In this article, you will learn how to
> * download different versions of files and
> * set up automatic removal of previous versions.
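As a rough boto3 counterpart to the AWS CLI steps this article uses, assuming an S3 client already configured for your cloud's endpoint; the rule ID and the 30-day retention below are illustrative values, not recommendations from this article:

```python
def enable_versioning_and_cleanup(s3, bucket, keep_days=30):
    """Enable versioning on *bucket* and expire noncurrent versions.

    *s3* is a boto3 S3 client already configured with the endpoint of
    your cloud (the equivalent of --endpoint-url in AWS CLI commands).
    """
    # Turn on versioning for the bucket.
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )
    # Automatically remove previous (noncurrent) versions after keep_days.
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "expire-old-versions",   # illustrative rule name
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},      # apply to all keys
                    "NoncurrentVersionExpiration": {"NoncurrentDays": keep_days},
                }
            ]
        },
    )
```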
Prerequisites[](#prerequisites "Permalink to this headline")
Prerequisites[🔗](#prerequisites "Permalink to this headline")
-------------------------------------------------------------
No. 1 **Account**
@ -48,7 +48,7 @@ No. 5 **Terminology: container vs. bucket**
In this article, both “container” and “bucket” represent the same category of resources hosted on CloudFerro Cloud cloud. The former term is more often used by the Horizon dashboard and the latter term is more often used by AWS CLI.
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
What We Are Going To Cover[🔗](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------
> * Configuring and testing AWS CLI
@ -83,12 +83,12 @@ What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this h
> > + Bucket on which versioning has never been enabled
> > + Suspending of versioning
Configuring and testing AWS CLI[](#configuring-and-testing-aws-cli "Permalink to this headline")
Configuring and testing AWS CLI[🔗](#configuring-and-testing-aws-cli "Permalink to this headline")
-------------------------------------------------------------------------------------------------
Next, we show how to configure AWS CLI for the first time; if it has already been configured, you may need to adjust the configuration to match the needs of this article.
### Step 1: Configure AWS CLI[](#step-1-configure-aws-cli "Permalink to this headline")
### Step 1: Configure AWS CLI[🔗](#step-1-configure-aws-cli "Permalink to this headline")
To configure AWS CLI, create a folder called **.aws** in your home directory:
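As a minimal sketch of this step, the folder and a credentials file for the default profile could be created as follows. The key values in all-capital letters are placeholders; replace them with the EC2 credentials generated for your project:

```shell
# Create the AWS CLI configuration folder in the home directory.
mkdir -p ~/.aws

# Write credentials for the default profile.
# YOUR_ACCESS_KEY and YOUR_SECRET_KEY are placeholders.
cat > ~/.aws/credentials <<'EOF'
[default]
aws_access_key_id = YOUR_ACCESS_KEY
aws_secret_access_key = YOUR_SECRET_KEY
EOF
```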
@ -137,7 +137,7 @@ Save the file and exit the text editor.
The commands we provide in this article will have the appropriate endpoint already included, via the **--endpoint-url** parameter, and all you need to do is to select the command for the cloud that you are using.
### Step 2: Verify that AWS CLI is working[](#step-2-verify-that-aws-cli-is-working "Permalink to this headline")
### Step 2: Verify that AWS CLI is working[🔗](#step-2-verify-that-aws-cli-is-working "Permalink to this headline")
Execute command **list-buckets** to list buckets:
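A sketch of the invocation is below; ENDPOINT\_URL is a placeholder for the S3 endpoint of the cloud you are using:

```shell
# List all buckets in the project; requires valid credentials and the
# endpoint for your cloud (ENDPOINT_URL is a placeholder).
aws s3api list-buckets \
    --endpoint-url https://ENDPOINT_URL
```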
@ -198,7 +198,7 @@ Note
In this article, colors have been added to make JSON more legible. AWS CLI typically does not output colored text.
Assigning bucket names to shell variables[](#assigning-bucket-names-to-shell-variables "Permalink to this headline")
Assigning bucket names to shell variables[🔗](#assigning-bucket-names-to-shell-variables "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------
To differentiate between different buckets used in various examples of this article, we will use the following shell variables:
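For example, the variables could be assigned like this. The names shown are assumptions; bucket names must be unique, so pick your own (see “Making sure that bucket names are unique”):

```shell
# Example bucket names only -- choose your own unique names.
# $$ (the shell's process ID) is used here as a cheap uniqueness suffix.
bucket_name1=versioning-test-$$
bucket_name2=examplebucket-$$
bucket_name3=lifecycle-test-$$
echo "$bucket_name1 $bucket_name2 $bucket_name3"
```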
@ -244,7 +244,7 @@ aws s3api create-bucket \
```
### Making sure that bucket names are unique[](#making-sure-that-bucket-names-are-unique "Permalink to this headline")
### Making sure that bucket names are unique[🔗](#making-sure-that-bucket-names-are-unique "Permalink to this headline")
If **single tenancy** is enabled on the cloud you are using, the name of your bucket needs to be unique for the entire cloud. Buckets called **versioning-test**, **examplebucket** etc. may well already exist.
If that is the case, the output from **create-bucket** command will look like this:
@ -295,7 +295,7 @@ Reusing the existing buckets
Avoid using the existing buckets
: Go through the article without previous baggage, using a “clean slate” approach. This is what you would normally do when using the article for the very first time.
Creating a bucket without versioning[](#creating-a-bucket-without-versioning "Permalink to this headline")
Creating a bucket without versioning[🔗](#creating-a-bucket-without-versioning "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------
Command **create-bucket** will create a bucket under your chosen name (variable **$bucket\_name1**).
@ -332,7 +332,7 @@ aws s3api create-bucket \
The output of this command should be empty if everything went well.
Enabling versioning on a bucket[](#enabling-versioning-on-a-bucket "Permalink to this headline")
Enabling versioning on a bucket[🔗](#enabling-versioning-on-a-bucket "Permalink to this headline")
-------------------------------------------------------------------------------------------------
To enable versioning on the bucket **$bucket\_name1**, use command **put-bucket-versioning**:
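A sketch of the command, assuming **$bucket\_name1** is set and ENDPOINT\_URL is replaced with your cloud’s endpoint:

```shell
# Turn on versioning for the bucket; the output should be empty.
aws s3api put-bucket-versioning \
    --bucket "$bucket_name1" \
    --versioning-configuration Status=Enabled \
    --endpoint-url https://ENDPOINT_URL
```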
@ -377,7 +377,7 @@ On Amazon Web Services, the presence of parameter **MFADelete** increases securi
The output of this command should also be empty.
Uploading file[](#uploading-file "Permalink to this headline")
Uploading file[🔗](#uploading-file "Permalink to this headline")
---------------------------------------------------------------
Let’s say that we upload a file to the root directory of our container. Let the name of the file be **something.txt** and let it have the following content: **This is version 1**.
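The upload could be sketched as follows (ENDPOINT\_URL is a placeholder):

```shell
# Create the file locally, then upload it to the bucket root.
echo "This is version 1" > something.txt

aws s3api put-object \
    --bucket "$bucket_name1" \
    --key something.txt \
    --body something.txt \
    --endpoint-url https://ENDPOINT_URL
```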
@ -446,7 +446,7 @@ This upload created the first version of the file. The ID of this version is **w
The output also provides an **ETag** key, which is a hash of the object you uploaded.
S3 paths[](#s3-paths "Permalink to this headline")
S3 paths[🔗](#s3-paths "Permalink to this headline")
---------------------------------------------------
The parameter **--key** from **put-object** command may also be used in other commands to reference an already uploaded object in the bucket.
@ -543,7 +543,7 @@ aws s3api put-object \
```
Uploading another version of a file[](#uploading-another-version-of-a-file "Permalink to this headline")
Uploading another version of a file[🔗](#uploading-another-version-of-a-file "Permalink to this headline")
---------------------------------------------------------------------------------------------------------
Let us now return to **$bucket\_name1**.
@ -680,10 +680,10 @@ Owner
In the example above, the bucket still contains only one file - **something.txt**. This upload overwrote it with a new version, but the previous version is still present.
Listing available versions of a file[](#listing-available-versions-of-a-file "Permalink to this headline")
Listing available versions of a file[🔗](#listing-available-versions-of-a-file "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------
### Example 1: One file, two versions[](#example-1-one-file-two-versions "Permalink to this headline")
### Example 1: One file, two versions[🔗](#example-1-one-file-two-versions "Permalink to this headline")
To list the available versions of files in this bucket, use **list-object-versions**:
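A sketch of the invocation (ENDPOINT\_URL is a placeholder):

```shell
# Show every stored version of every object in the bucket,
# including deletion markers.
aws s3api list-object-versions \
    --bucket "$bucket_name1" \
    --endpoint-url https://ENDPOINT_URL
```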
@ -756,7 +756,7 @@ The output could look like this:
It contains two versions we created previously. Each has its own ID, which is the value of parameter **VersionId**:
Table 5 Key vs. VersionId[](#id1 "Permalink to this table")
Table 5 Key vs. VersionId[🔗](#id1 "Permalink to this table")
| | |
| --- | --- |
@ -766,7 +766,7 @@ Table 5 Key vs. VersionId[](#id1 "Permalink to this table")
Both of them are tied to the same file called **something.txt**.
### Example 2: Multiple files, multiple versions[](#example-2-multiple-files-multiple-versions "Permalink to this headline")
### Example 2: Multiple files, multiple versions[🔗](#example-2-multiple-files-multiple-versions "Permalink to this headline")
Let us now consider an alternative situation in which we have two files, and one of them has two versions.
@ -820,7 +820,7 @@ The output of **list-object-versions** could then look like this:
```
Table 6 Key vs. VersionId[](#id2 "Permalink to this table")
Table 6 Key vs. VersionId[🔗](#id2 "Permalink to this table")
| | |
| --- | --- |
@ -831,7 +831,7 @@ Table 6 Key vs. VersionId[](#id2 "Permalink to this table")
File **something1.txt** has one version, while file **something2.txt** has two versions.
Downloading a chosen version of the file[](#downloading-a-chosen-version-of-the-file "Permalink to this headline")
Downloading a chosen version of the file[🔗](#downloading-a-chosen-version-of-the-file "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------
Let us return to **$bucket\_name1**.
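A sketch of downloading a specific version with **get-object**; VERSION\_ID is a placeholder for a value taken from the **list-object-versions** output:

```shell
# Download the chosen version of something.txt to a local file.
aws s3api get-object \
    --bucket "$bucket_name1" \
    --key something.txt \
    --version-id VERSION_ID \
    --endpoint-url https://ENDPOINT_URL \
    something-v1.txt
```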
@ -914,7 +914,7 @@ Displaying its contents with the **cat** command reveals that it is indeed the f
![s3-bucket-versioning-05_creodias.png](../_images/s3-bucket-versioning-05_creodias.png)
Deleting objects on version-enabled buckets[](#deleting-objects-on-version-enabled-buckets "Permalink to this headline")
Deleting objects on version-enabled buckets[🔗](#deleting-objects-on-version-enabled-buckets "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------
AWS CLI includes command **delete-object** which is used to delete files stored on buckets. It behaves differently depending on the circumstances:
@ -930,7 +930,7 @@ The version to be deleted is specified
Here are the examples for version-enabled buckets.
### Setting up a deletion marker[](#setting-up-a-deletion-marker "Permalink to this headline")
### Setting up a deletion marker[🔗](#setting-up-a-deletion-marker "Permalink to this headline")
The command to delete files from buckets is **delete-object**.
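On a version-enabled bucket, calling it without **--version-id** only adds a deletion marker; no data is removed. A sketch:

```shell
# Creates a deletion marker for something.txt; all stored versions
# remain in the bucket.
aws s3api delete-object \
    --bucket "$bucket_name1" \
    --key something.txt \
    --endpoint-url https://ENDPOINT_URL
```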
@ -1120,7 +1120,7 @@ Within the Horizon dashboard, the file is also “invisible”:
Even though the file cannot be seen, the size of the bucket is still displayed correctly - 36 bytes. Each stored version of each file adds to the total size.
### Removing the deletion marker[](#removing-the-deletion-marker "Permalink to this headline")
### Removing the deletion marker[🔗](#removing-the-deletion-marker "Permalink to this headline")
To restore the visibility of a file, delete its deletion marker by issuing command **delete-object** and specify the **VersionID** of the deletion marker:
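A sketch; MARKER\_VERSION\_ID is a placeholder for the deletion marker’s **VersionId** as shown under **DeleteMarkers** in the **list-object-versions** output:

```shell
# Deleting the deletion marker itself makes the file visible again.
aws s3api delete-object \
    --bucket "$bucket_name1" \
    --key something.txt \
    --version-id MARKER_VERSION_ID \
    --endpoint-url https://ENDPOINT_URL
```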
@ -1310,7 +1310,7 @@ The file should now also be visible in Horizon again:
Note that on this screenshot, the visible file has a size of 18 bytes, whereas the total size of this bucket is 36 bytes. This is because the total size includes both stored versions of the file.
### Permanently removing files from version-enabled bucket[](#permanently-removing-files-from-version-enabled-bucket "Permalink to this headline")
### Permanently removing files from version-enabled bucket[🔗](#permanently-removing-files-from-version-enabled-bucket "Permalink to this headline")
You can delete versions of a file stored in the bucket just like you deleted the previously mentioned delete marker.
@ -1510,7 +1510,7 @@ we will see that there are no files or versions there:
```
Using lifecycle policy to configure automatic deletion of previous versions of files[](#using-lifecycle-policy-to-configure-automatic-deletion-of-previous-versions-of-files "Permalink to this headline")
Using lifecycle policy to configure automatic deletion of previous versions of files[🔗](#using-lifecycle-policy-to-configure-automatic-deletion-of-previous-versions-of-files "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
A “noncurrent version” is any version of a file other than the latest. In this section, we will cover how to configure automatic deletion of these versions after a specified number of days.
@ -1519,7 +1519,7 @@ For this purpose, we will use a feature called “lifecycle policy”.
This example covers configuring automatic removal of noncurrent versions of a file 1 day after a newer version of the same file has been uploaded.
### Preparing the testing environment[](#preparing-the-testing-environment "Permalink to this headline")
### Preparing the testing environment[🔗](#preparing-the-testing-environment "Permalink to this headline")
For testing, create a bucket whose name is stored in variable **$bucket\_name3** and enable versioning:
@ -1837,7 +1837,7 @@ confirms that one of the files has two versions while the other only has one ver
Unlike the other versions of files stored here, the first version of file **mycode.py** has **false** under the key **IsLatest**. This shows that it is not the latest version of that file.
### Setting up automatic removal of previous versions[](#setting-up-automatic-removal-of-previous-versions "Permalink to this headline")
### Setting up automatic removal of previous versions[🔗](#setting-up-automatic-removal-of-previous-versions "Permalink to this headline")
The lifecycle policy is written in JSON. Create a file named **noncurrent-policy.json** in your current working directory (it doesn’t have to be the location of the file which contains your login credentials) and enter the following code into it:
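A minimal policy along these lines (the rule ID is an arbitrary label; field names follow the S3 lifecycle configuration schema) could be written like this:

```shell
# Expire noncurrent versions of all objects (empty Prefix) one day
# after a newer version replaces them.
cat > noncurrent-policy.json <<'EOF'
{
    "Rules": [
        {
            "ID": "expire-noncurrent",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "NoncurrentVersionExpiration": {"NoncurrentDays": 1}
        }
    ]
}
EOF
```

The file can then be applied to the bucket with **put-bucket-lifecycle-configuration** using `--lifecycle-configuration file://noncurrent-policy.json`.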
@ -2022,7 +2022,7 @@ reveals that the version of file **mycode.py** which is not the latest was delet
```
### Deleting lifecycle policy[](#deleting-lifecycle-policy "Permalink to this headline")
### Deleting lifecycle policy[🔗](#deleting-lifecycle-policy "Permalink to this headline")
Command **delete-bucket-lifecycle** deletes bucket lifecycle policy. This is how to do it on bucket **$bucket\_name3**.
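A sketch of the command (ENDPOINT\_URL is a placeholder):

```shell
# Remove the entire lifecycle configuration from the bucket.
aws s3api delete-bucket-lifecycle \
    --bucket "$bucket_name3" \
    --endpoint-url https://ENDPOINT_URL
```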
@ -2099,12 +2099,12 @@ argument of type 'NoneType' is not iterable
The policy should now no longer apply.
Suspending versioning[](#suspending-versioning "Permalink to this headline")
Suspending versioning[🔗](#suspending-versioning "Permalink to this headline")
-----------------------------------------------------------------------------
If you no longer want to store multiple versions of files, you can suspend the versioning.
### Bucket on which versioning has never been enabled[](#bucket-on-which-versioning-has-never-been-enabled "Permalink to this headline")
### Bucket on which versioning has never been enabled[🔗](#bucket-on-which-versioning-has-never-been-enabled "Permalink to this headline")
To better understand how it works, let us start with a bucket in which versioning has never been enabled in the first place.
@ -2112,7 +2112,7 @@ On such a bucket, every file will only have one version which has one and the sa
If you upload another file under the same name, its **VersionID** will also be **null**, and it will replace the previously uploaded file.
#### Example[](#example "Permalink to this headline")
#### Example[🔗](#example "Permalink to this headline")
For this example, we will create bucket **$bucket\_name4** on which versioning has never been enabled.
@ -2148,7 +2148,7 @@ aws s3api create-bucket \
Buckets can, of course, contain files of various types. For the sake of this example, suppose that the bucket contains the following three files:
Table 7 File vs. the editor[](#id3 "Permalink to this table")
Table 7 File vs. the editor[🔗](#id3 "Permalink to this table")
| | |
| --- | --- |
@ -2464,7 +2464,7 @@ we should get the output like this:
Once again, there are three files, each with exactly one version stored. The file **script.sh** was overwritten during our upload - its parameters **Size**, **ETag** and **LastModified** (timestamp of last modification) have changed.
### Suspending of versioning[](#suspending-of-versioning "Permalink to this headline")
### Suspending of versioning[🔗](#suspending-of-versioning "Permalink to this headline")
When you suspend the versioning, your bucket will start behaving similarly to a bucket on which versioning has never been enabled. All files uploaded from that moment on will have **null** as their **VersionId**.
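Suspension uses the same **put-bucket-versioning** command, with a different status. A sketch:

```shell
# Stop creating new versions; existing versions are kept.
aws s3api put-bucket-versioning \
    --bucket "$bucket_name1" \
    --versioning-configuration Status=Suspended \
    --endpoint-url https://ENDPOINT_URL
```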
@ -2551,7 +2551,7 @@ Upload a few files to this bucket with **put-object** command. Make sure that at
In this example, our bucket contains the following files:
Table 8 Key vs. VersionId[](#id4 "Permalink to this table")
Table 8 Key vs. VersionId[🔗](#id4 "Permalink to this table")
| | |
| --- | --- |
@ -2980,7 +2980,7 @@ After this upload, we list versions one more time and get the following output:
Once again, there is only one version which has **null** as its ID - the upload overwrote the previous version. The date of last modification (**LastModified**) has changed. Its previous value was **2024-09-16T11:31:01.968Z** and now it is **2024-09-16T11:34:25.528Z**.
What To Do Next[](#what-to-do-next "Permalink to this headline")
What To Do Next[🔗](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------
AWS CLI is not the only available way of interacting with object storage. Other ways include:
@ -1,7 +1,7 @@
Server-Side Encryption with Customer-Managed Keys (SSE-C) on CloudFerro Cloud[](#server-side-encryption-with-customer-managed-keys-sse-c-on-brand-name "Permalink to this headline")
Server-Side Encryption with Customer-Managed Keys (SSE-C) on CloudFerro Cloud[🔗](#server-side-encryption-with-customer-managed-keys-sse-c-on-brand-name "Permalink to this headline")
=====================================================================================================================================================================================
Introduction[](#introduction "Permalink to this headline")
Introduction[🔗](#introduction "Permalink to this headline")
-----------------------------------------------------------
This guide explains how to encrypt your objects server-side with SSE-C.
@ -10,7 +10,7 @@ Server-side encryption is a way of protecting data at rest. SSE encrypts only th
SSE-C operates at the moment an object is uploaded. The server uses the encryption key you provide to apply AES-256 encryption to the data, then removes the encryption key from memory. To access the data again, you must provide the same encryption key with the request. The server verifies that the provided key matches and then decrypts the object before returning the object data to you.
Requirements[](#requirements "Permalink to this headline")
Requirements[🔗](#requirements "Permalink to this headline")
-----------------------------------------------------------
* A bucket ([How to use Object Storage on CloudFerro Cloud](How-to-use-Object-Storage-on-CloudFerro-Cloud.html.md))
@ -50,7 +50,7 @@ Attention
If you lose the encryption key, the object is also lost. Our servers do not store encryption keys, so it is not possible to access the data again without them.
REST API[](#rest-api "Permalink to this headline")
REST API[🔗](#rest-api "Permalink to this headline")
---------------------------------------------------
To encrypt or decrypt objects in SSE-C mode the following headers are required:
@ -76,7 +76,7 @@ Headers apply to the following API operations:
> * UploadPart
> * UploadPart-Copy (to target parts)
Example No 1 Generate header values[](#example-no-1-generate-header-values "Permalink to this headline")
Example No 1 Generate header values[🔗](#example-no-1-generate-header-values "Permalink to this headline")
---------------------------------------------------------------------------------------------------------
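A sketch of generating the key material, consistent with the fragment shown in the diff below: it assumes a 256-bit key stored in a file named **sse-c.key**, from which the base64-encoded key and its MD5 digest (the SSE-C header values) are derived:

```shell
# Generate a random 256-bit (32-byte) encryption key.
openssl rand -out sse-c.key 32

# Base64-encoded key for x-amz-server-side-encryption-customer-key.
keyb64=$(base64 < sse-c.key | tr -d '\n')

# Base64-encoded MD5 of the raw key for
# x-amz-server-side-encryption-customer-key-MD5.
keymd5=$(openssl dgst -md5 -binary < sse-c.key | base64)

echo "$keyb64"
echo "$keymd5"
```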
```
@ -95,7 +95,7 @@ keymd5=$(cat sse-c.key | openssl dgst -md5 -binary | base64)
```
Example No 2 aws-cli (s3api)[](#example-no-2-aws-cli-s3api "Permalink to this headline")
Example No 2 aws-cli (s3api)[🔗](#example-no-2-aws-cli-s3api "Permalink to this headline")
-----------------------------------------------------------------------------------------
Upload an object with SSE-C encryption enabled
@ -111,7 +111,7 @@ aws s3api put-object \
```
Example No 3 aws-cli (s3)[](#example-no-3-aws-cli-s3 "Permalink to this headline")
Example No 3 aws-cli (s3)[🔗](#example-no-3-aws-cli-s3 "Permalink to this headline")
-----------------------------------------------------------------------------------
```
@ -122,7 +122,7 @@ aws s3 cp file.txt s3://bucket-name/ \
```
Example No 4 aws-cli (s3 blob)[](#example-no-4-aws-cli-s3-blob "Permalink to this headline")
Example No 4 aws-cli (s3 blob)[🔗](#example-no-4-aws-cli-s3-blob "Permalink to this headline")
---------------------------------------------------------------------------------------------
```
@ -137,7 +137,7 @@ Note
At the moment **s3cmd** does not support SSE-C encryption.
Downloading the encrypted object[](#downloading-the-encrypted-object "Permalink to this headline")
Downloading the encrypted object[🔗](#downloading-the-encrypted-object "Permalink to this headline")
---------------------------------------------------------------------------------------------------
```
@ -2,7 +2,7 @@
## Available Documentation
* [Policy JSON files sections[](#policy-json-file-s-sections "Permalink to this headline")](Bucket-sharing-using-s3-bucket-policy-on-CloudFerro-Cloud.html.md)
* [Policy JSON files sections[🔗](#policy-json-file-s-sections "Permalink to this headline")](Bucket-sharing-using-s3-bucket-policy-on-CloudFerro-Cloud.html.md)
* [Configuration files for s3cmd command on CloudFerro Cloud](Configuration-files-for-s3cmd-command-on-CloudFerro-Cloud.html.md)
* [How To Install boto3 In Windows on CloudFerro Cloud](How-To-Install-boto3-In-Windows-on-CloudFerro-Cloud.html.md)
* [How to access object storage from CloudFerro Cloud using boto3](How-to-access-object-storage-from-CloudFerro-Cloud-using-boto3.html.md)
@ -12,6 +12,6 @@
* [How to install s3cmd on Linux on CloudFerro Cloud](How-to-install-s3cmd-on-Linux-on-CloudFerro-Cloud.html.md)
* [How to mount object storage container as a file system in Linux using s3fs on CloudFerro Cloud](How-to-mount-object-storage-container-as-a-file-system-in-Linux-using-s3fs-on-CloudFerro-Cloud.html.md)
* [How to mount object storage container from CloudFerro Cloud as file system on local Windows computer](How-to-mount-object-storage-container-from-CloudFerro-Cloud-as-file-system-on-local-Windows-computer.html.md)
* [Deleting one file[](#deleting-one-file "Permalink to this headline")](How-to-use-Object-Storage-on-CloudFerro-Cloud.html.md)
* [Step 1: Configure AWS CLI[](#step-1-configure-aws-cli "Permalink to this headline")](S3-bucket-object-versioning-on-CloudFerro-Cloud.html.md)
* [Deleting one file[🔗](#deleting-one-file "Permalink to this headline")](How-to-use-Object-Storage-on-CloudFerro-Cloud.html.md)
* [Step 1: Configure AWS CLI[🔗](#step-1-configure-aws-cli "Permalink to this headline")](S3-bucket-object-versioning-on-CloudFerro-Cloud.html.md)
* [Server Side Encryption with Customer Managed Keys SSE C on CloudFerro Cloud](Server-Side-Encryption-with-Customer-Managed-Keys-SSE-C-on-CloudFerro-Cloud.html.md)