Bucket sharing using S3 bucket policy on CloudFerro Cloud
=========================================================

S3 bucket policy
----------------

**Ceph** is the Software Defined Storage used in the CloudFerro Cloud, providing object storage compatible with a subset of the Amazon S3 API. Bucket policy in Ceph is part of the S3 API and allows selective sharing of access to object storage buckets between users of different projects in the same cloud.

Naming conventions used in this document
----------------------------------------

Bucket Owner
: An OpenStack tenant who created an object storage bucket in their project and intends to share the bucket, or a subset of its objects, with another tenant in the same cloud.

Bucket User
: An OpenStack tenant who wants to gain access to a Bucket Owner’s object storage bucket.

Bucket Owner’s Project
: A project in which a shared bucket is created.

Bucket User’s Project
: A project which gets access to the Bucket Owner’s object storage bucket.

Tenant Admin
: A tenant’s administrator user who can create OpenStack projects and manage users and roles within their domain.

In code examples, values typed in all-capital letters, such as BUCKET\_OWNER\_PROJECT\_ID, are placeholders which should be replaced with actual values matching your use case.

Limitations
-----------

Access can be granted at the project level only, not at the user level. To grant access to an individual user, the Bucket User’s Tenant Admin must create a separate project within their domain and grant only the selected users access to it.

The Ceph S3 implementation:

> * supports the S3 actions listed below when set through a bucket policy, but
> * does not support user, role or group policies.

s3cmd configuration
-------------------

To share a bucket using an S3 bucket policy, you first have to configure s3cmd as described in this tutorial: [How to access private object storage using S3cmd or boto3 on CloudFerro Cloud](How-to-access-private-object-storage-using-S3cmd-or-boto3-on-CloudFerro-Cloud.html)
Declaring bucket policy
-----------------------

Important

The code in this article will work only if the value of the **Version** parameter is:

```
"Version": "2012-10-17",
```

### Policy JSON file’s sections

Bucket policy is declared using a JSON file. It can be created using editors such as **vim** or **nano**. Here is an example policy JSON template:

```
{
  "Id": "POLICY_ID",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "STATEMENT_NAME",
      "Action": [
        "s3:ACTION_1",
        "s3:ACTION_2"
      ],
      "Effect": "EFFECT",
      "Resource": "arn:aws:s3:::KEY_SPECIFICATION",
      "Condition": {
        "CONDITION_1": {
        }
      },
      "Principal": {
        "AWS": [
          "arn:aws:iam::PROJECT_ID:root"
        ]
      }
    }
  ]
}
```

POLICY\_ID
: ID of your policy.

STATEMENT\_NAME
: Name of your statement.

ACTION
: An action which the Bucket User is granted permission to perform on the bucket.

PROJECT\_ID
: The ID of the project being granted access.
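Before applying a policy, you can sanity-check the JSON file with a few lines of Python. This is a minimal sketch using only the standard library; the file name `policy.json` and its contents are hypothetical examples:

```python
import json

def check_policy(path):
    """Load a bucket-policy JSON file and verify the Version value Ceph expects."""
    with open(path) as f:
        policy = json.load(f)  # raises ValueError on malformed JSON
    assert policy.get("Version") == "2012-10-17", "unsupported policy Version"
    return policy

# Hypothetical example file:
with open("policy.json", "w") as f:
    f.write('{"Id": "p1", "Version": "2012-10-17", "Statement": []}')

policy = check_policy("policy.json")
print(policy["Id"])
```

A malformed file fails fast here, before s3cmd ever sees it.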
### List of actions

```
s3:AbortMultipartUpload
s3:CreateBucket
s3:DeleteBucketPolicy
s3:DeleteBucket
s3:DeleteBucketWebsite
s3:DeleteObject
s3:DeleteObjectVersion
s3:GetBucketAcl
s3:GetBucketCORS
s3:GetBucketLocation
s3:GetBucketPolicy
s3:GetBucketRequestPayment
s3:GetBucketVersioning
s3:GetBucketWebsite
s3:GetLifecycleConfiguration
s3:GetObjectAcl
s3:GetObject
s3:GetObjectTorrent
s3:GetObjectVersionAcl
s3:GetObjectVersion
s3:GetObjectVersionTorrent
s3:ListAllMyBuckets
s3:ListBucketMultiPartUploads
s3:ListBucket
s3:ListBucketVersions
s3:ListMultipartUploadParts
s3:PutBucketAcl
s3:PutBucketCORS
s3:PutBucketPolicy
s3:PutBucketRequestPayment
s3:PutBucketVersioning
s3:PutBucketWebsite
s3:PutLifecycleConfiguration
s3:PutObjectAcl
s3:PutObject
s3:PutObjectVersionAcl
```
### KEY\_SPECIFICATION

It defines a bucket and its keys/objects. For example:

```
"arn:aws:s3:::*" - all buckets and all of their objects
"arn:aws:s3:::MY_SHARED_BUCKET/*" - all objects in MY_SHARED_BUCKET
"arn:aws:s3:::MY_SHARED_BUCKET/myfolder/*" - all objects which are subkeys of myfolder in MY_SHARED_BUCKET
```
### Conditions

Additional conditions filter access to the bucket. For example, you can grant access only to a specific IP address using:

```
"Condition": {
  "IpAddress": {
    "aws:SourceIp": "USER_IP_ADDRESS/32"
  }
}
```

or, conversely, you can grant access from all IP addresses except a specific one using:

```
"Condition": {
  "NotIpAddress": {
    "aws:SourceIp": "EXCLUDED_USER_IP_ADDRESS/32"
  }
}
```
Setting a policy on the bucket
------------------------------

The policy may be set on a bucket using the following command:

```
s3cmd setpolicy POLICY_JSON_FILE s3://MY_SHARED_BUCKET
```

To check the policy on a bucket, use:

```
s3cmd info s3://MY_SHARED_BUCKET
```

The policy may be deleted from the bucket using:

```
s3cmd delpolicy s3://MY_SHARED_BUCKET
```
Sample scenarios
----------------

### 1 – Grant read/write access to a Bucket User using their **PROJECT\_ID**

A Bucket Owner wants to grant read/write access to a bucket to a Bucket User, using the Bucket User’s **PROJECT\_ID**:

```
{
  "Version": "2012-10-17",
  "Id": "read-write",
  "Statement": [
    {
      "Sid": "project-read-write",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::BUCKET_OWNER_PROJECT_ID:root",
          "arn:aws:iam::BUCKET_USER_PROJECT_ID:root"
        ]
      },
      "Action": [
        "s3:ListBucket",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ]
    }
  ]
}
```

Let’s assume that the file with this policy is named “read-write-policy.json”. To apply it, the Bucket Owner should issue:

```
s3cmd setpolicy read-write-policy.json s3://MY_SHARED_BUCKET
```

Then, to access the bucket (for example, to list it), the Bucket User should issue:

```
s3cmd ls s3://MY_SHARED_BUCKET
```
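The read/write policy file from this scenario can also be generated programmatically, which avoids JSON typos. The following is a sketch using only the Python standard library; the project IDs passed at the bottom are hypothetical placeholders:

```python
import json

def make_read_write_policy(owner_project_id, user_project_id):
    """Build the read/write bucket policy shown above as a Python dict."""
    return {
        "Version": "2012-10-17",
        "Id": "read-write",
        "Statement": [
            {
                "Sid": "project-read-write",
                "Effect": "Allow",
                "Principal": {
                    "AWS": [
                        "arn:aws:iam::{}:root".format(owner_project_id),
                        "arn:aws:iam::{}:root".format(user_project_id),
                    ]
                },
                "Action": [
                    "s3:ListBucket",
                    "s3:PutObject",
                    "s3:DeleteObject",
                    "s3:GetObject",
                ],
                "Resource": ["arn:aws:s3:::*"],
            }
        ],
    }

# Hypothetical project IDs; replace with real values.
policy = make_read_write_policy("aaaabbbb", "ccccdddd")
with open("read-write-policy.json", "w") as f:
    json.dump(policy, f, indent=2)
```

The resulting file can then be applied with `s3cmd setpolicy` exactly as above.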
### 2 – Limit read/write access to a bucket to users accessing from a specific IP address range

A Bucket Owner wants to grant read/write access to Bucket Users who access the bucket from specific IP ranges.

In this case, we set the AWS principal to “\*”, which in theory grants access to every project in CloudFerro Cloud; the condition, however, then filters access down to a single IP address.

```
{
  "Id": "Policy1654675551882",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1654675545682",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::MY_SHARED_BUCKET/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "IP_ADDRESS/32"
        }
      },
      "Principal": {
        "AWS": [
          "*"
        ]
      }
    }
  ]
}
```

Let’s assume that the file with this policy is named “read-write-policy-ip.json”. To apply it, the Bucket Owner should issue:

```
s3cmd setpolicy read-write-policy-ip.json s3://MY_SHARED_BUCKET
```
Configuration files for s3cmd command on CloudFerro Cloud
=========================================================

[s3cmd](https://github.com/s3tools/s3cmd) can access remote data using the S3 protocol. This includes the **EODATA** repository and object storage on the CloudFerro Cloud cloud.

To connect to S3 storage, **s3cmd** uses several parameters, such as an access key, secret key, S3 endpoint, and others. During configuration, you can enter this data interactively, and the command saves it into a configuration file. This file can then be passed to **s3cmd** when issuing commands using the connection described within.

If you want to use multiple connections from a single virtual machine (such as connecting both to the **EODATA** repository and to object storage on CloudFerro Cloud cloud), you can create and store multiple configuration files — one per connection.

This article provides examples of how to create and save these configuration files under various circumstances and describes some potential problems you may encounter.

The examples are **not** intended to be executed sequentially as part of a workflow; instead, they illustrate different use cases of **s3cmd** operations.

What We Are Going To Cover
How to Install Boto3 in Windows on CloudFerro Cloud
===================================================

The **boto3** library for Python is used for listing and downloading items from a specified bucket or repository. In this article, you will install it on a Windows system.

Step 1: Ensure That Python3 is Preinstalled
-------------------------------------------

**On a Desktop Windows System**

To run **boto3**, you need to have Python preinstalled. If you are running Windows on a desktop computer, the first step of this article shows how to do it: [How to install OpenStackClient GitBash for Windows on CloudFerro Cloud](../openstackcli/How-to-install-OpenStackClient-GitBash-or-Cygwin-for-Windows-on-CloudFerro-Cloud.html).

**On a Virtual Machine Running in CloudFerro Cloud Cloud**

Virtual machines created in the CloudFerro Cloud cloud have Python3 already preinstalled. If you want to spawn your own Windows VM, two steps are involved:

1. Log into your CloudFerro Cloud hosting account with access to the Horizon interface: <https://horizon.cloudferro.com>.

2. Use or create a new instance in the cloud. See article: [Connecting to a Windows VM via RDP through a Linux bastion host port forwarding on CloudFerro Cloud](../windows/Connecting-to-a-Windows-VM-via-RDP-through-a-Linux-bastion-host-port-forwarding-on-CloudFerro-Cloud.html).

Step 2: Install boto3 on Windows
--------------------------------

In order to install boto3 on Windows:

> * Log in as administrator.
> * Click on the Windows icon in the bottom left of your Desktop.
> * Find the Command Prompt by entering the **cmd** abbreviation.

![../_images/cmd_boto31.png](../_images/cmd_boto31.png)

Verify that you have an up-to-date Python installed by entering “python -V”.

![../_images/cmd_boto3_python1.png](../_images/cmd_boto3_python1.png)

Then install boto3 with the following command:

```
pip install boto3
```

![../_images/cmd_boto3_pip1.png](../_images/cmd_boto3_pip1.png)

Verify your installation with the command:

```
pip show boto3
```

![../_images/show_boto31.png](../_images/show_boto31.png)
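As an additional check, you can confirm from Python itself whether the package is visible to your interpreter. This short sketch works whether or not boto3 is installed, because it only queries the import machinery:

```python
import importlib.util

# find_spec returns None when the named package cannot be imported.
spec = importlib.util.find_spec("boto3")
installed = spec is not None
print("boto3 installed:", installed)
```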
How to access object storage from CloudFerro Cloud using boto3
==============================================================

In this article, you will learn how to access object storage from CloudFerro Cloud using the Python library **boto3**.

What We Are Going To Cover
How to access object storage from CloudFerro Cloud using s3cmd
==============================================================

In this article, you will learn how to access object storage from CloudFerro Cloud on Linux using [s3cmd](https://github.com/s3tools/s3cmd), without mounting it as a file system. This can be done on a virtual machine on the CloudFerro Cloud cloud or on a local Linux computer.

What We Are Going To Cover
--------------------------

> * Object storage vs. standard file system
> * Terminology: container and bucket
> * Configuring s3cmd
> * S3 paths in s3cmd
> * Listing containers
> * Creating a container
> * Uploading a file to a container
> * Listing files and directories of the root directory of a container
> * Listing files and directories not in the root directory of a container
> * Removing a file from a container
> * Downloading a file from a container
> * Checking how much storage is being used on a container
> * Removing the entire container

Prerequisites
-------------

No. 1 **Account**

You need a CloudFerro Cloud hosting account with access to the Horizon interface: <https://horizon.cloudferro.com>.

No. 2 **Generated EC2 credentials**

You need to generate EC2 credentials. Learn more here: [How to generate and manage EC2 credentials on CloudFerro Cloud](../cloud/How-to-generate-ec2-credentials-on-CloudFerro-Cloud.html)

No. 3 **A Linux computer or virtual machine**

You need a Linux virtual machine or local computer. This article was written for Ubuntu 22.04. Other operating systems might work, but they are out of scope of this article and might require adjusting the commands.

If you want to use a virtual machine hosted on the CloudFerro Cloud cloud and you don’t have one yet, one of these articles can help:
How to access private object storage using S3cmd or boto3 on CloudFerro Cloud
=============================================================================

LEGACY ARTICLE

This article is marked as a legacy document and may not reflect the latest information.
Please refer to the following articles:

[How to access object storage from CloudFerro Cloud using boto3](How-to-access-object-storage-from-CloudFerro-Cloud-using-boto3.html)

[How to access object storage from CloudFerro Cloud using s3cmd](How-to-access-object-storage-from-CloudFerro-Cloud-using-s3cmd.html)

**Introduction**

Private object storage (buckets within a user’s project) can be used in various ways. For example, to access files located in object storage, buckets can be mounted and used as a file system using **s3fs**. Other tools which can be used to achieve better performance are **S3cmd** (a command line tool) and **boto3** (the AWS SDK for Python).

**S3cmd**

In order to acquire access to object storage buckets via S3cmd, you first have to generate your own EC2 credentials with this tutorial: [How to generate and manage EC2 credentials on CloudFerro Cloud](../cloud/How-to-generate-ec2-credentials-on-CloudFerro-Cloud.html).

Once EC2 credentials are generated, ensure that your instance or local machine is equipped with S3cmd:

```
s3cmd --version
```

If not, S3cmd can be installed with:

```
apt install s3cmd
```

Now S3cmd can be configured with the following command:

```
s3cmd --configure
```

Input and confirm (by pressing Enter) the following values:
The values depend on your cloud region (WAW4-1, WAW3-1, WAW3-2 or FRA1-2).

**WAW4-1**

```
New settings:
  Access Key: (your EC2 credentials)
  Secret Key: (your EC2 credentials)
  Default Region: default
  S3 Endpoint: s3.waw4-1.cloudferro.com
  DNS-style bucket+hostname:port template for accessing a bucket: s3.waw4-1.cloudferro.com
  Encryption password: (your password)
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: Yes
  HTTP Proxy server name:
  HTTP Proxy server port: 0
```

**WAW3-1**

```
New settings:
  Access Key: (your EC2 credentials)
  Secret Key: (your EC2 credentials)
  Default Region: waw3-1
  S3 Endpoint: s3.waw3-1.cloudferro.com
  DNS-style bucket+hostname:port template for accessing a bucket: s3.waw3-1.cloudferro.com
  Encryption password: (your password)
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: Yes
  HTTP Proxy server name:
  HTTP Proxy server port: 0
```

**WAW3-2**

```
New settings:
  Access Key: (your EC2 credentials)
  Secret Key: (your EC2 credentials)
  Default Region: default
  S3 Endpoint: s3.waw3-2.cloudferro.com
  DNS-style bucket+hostname:port template for accessing a bucket: s3.waw3-2.cloudferro.com
  Encryption password: (your password)
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: Yes
  HTTP Proxy server name:
  HTTP Proxy server port: 0
```

**FRA1-2**

```
New settings:
  Access Key: (your EC2 credentials)
  Secret Key: (your EC2 credentials)
  Default Region: default
  S3 Endpoint: s3.fra1-2.cloudferro.com
  DNS-style bucket+hostname:port template for accessing a bucket: s3.fra1-2.cloudferro.com
  Encryption password: (your password)
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: Yes
  HTTP Proxy server name:
  HTTP Proxy server port: 0
```
After this operation, you should be able to list and access your object storage.

List your buckets with:

```
eouser@vm01:$ s3cmd ls
2022-02-02 22:22  s3://bucket
```

To see available commands for S3cmd, type the following command:

```
s3cmd -h
```
**boto3**

Warning

We strongly recommend using virtualenv for isolating Python packages. The configuration tutorial is here: [How to install Python virtualenv or virtualenvwrapper on CloudFerro Cloud](../cloud/How-to-install-Python-virtualenv-or-virtualenvwrapper-on-CloudFerro-Cloud.html)

If virtualenv is activated:

```
(myvenv) eouser@vm01:~$ pip3 install boto3
```

Or, if you install the package globally:

```
eouser@vm01:~$ sudo pip3 install boto3
```

**Simple script for accessing your private bucket:**
Choose the script matching your cloud region (WAW4-1, WAW3-1, WAW3-2 or FRA1-2).

**WAW4-1**

```
import boto3


def boto3connection(access_key, secret_key, bucketname):
    host = 'https://s3.waw4-1.cloudferro.com'
    s3 = boto3.resource('s3', aws_access_key_id=access_key,
                        aws_secret_access_key=secret_key, endpoint_url=host)

    bucket = s3.Bucket(bucketname)
    for obj in bucket.objects.filter():
        print('{0}:{1}'.format(bucket.name, obj.key))


# For Python 3
x = input('Enter your access key:')
y = input('Enter your secret key:')
z = input('Enter your bucket name:')

boto3connection(x, y, z)
```

**WAW3-1**

```
import boto3


def boto3connection(access_key, secret_key, bucketname):
    host = 'https://s3.waw3-1.cloudferro.com'
    s3 = boto3.resource('s3', aws_access_key_id=access_key,
                        aws_secret_access_key=secret_key, endpoint_url=host)

    bucket = s3.Bucket(bucketname)
    for obj in bucket.objects.filter():
        print('{0}:{1}'.format(bucket.name, obj.key))


# For Python 3
x = input('Enter your access key:')
y = input('Enter your secret key:')
z = input('Enter your bucket name:')

boto3connection(x, y, z)
```

**WAW3-2**

```
import boto3


def boto3connection(access_key, secret_key, bucketname):
    host = 'https://s3.waw3-2.cloudferro.com'
    s3 = boto3.resource('s3', aws_access_key_id=access_key,
                        aws_secret_access_key=secret_key, endpoint_url=host)

    bucket = s3.Bucket(bucketname)
    for obj in bucket.objects.filter():
        print('{0}:{1}'.format(bucket.name, obj.key))


# For Python 3
x = input('Enter your access key:')
y = input('Enter your secret key:')
z = input('Enter your bucket name:')

boto3connection(x, y, z)
```

**FRA1-2**

```
import boto3


def boto3connection(access_key, secret_key, bucketname):
    host = 'https://s3.fra1-2.cloudferro.com'
    s3 = boto3.resource('s3', aws_access_key_id=access_key,
                        aws_secret_access_key=secret_key, endpoint_url=host)

    bucket = s3.Bucket(bucketname)
    for obj in bucket.objects.filter():
        print('{0}:{1}'.format(bucket.name, obj.key))


# For Python 3
x = input('Enter your access key:')
y = input('Enter your secret key:')
z = input('Enter your bucket name:')

boto3connection(x, y, z)
```
Save your file with the **.py** extension and execute the following command in the terminal:

```
python3 <filename.py>
```

Enter the access key, secret key and bucket name. If everything is correct, you should see output in the following format: **<bucket\_name>:<file\_name>**.
How to Delete Large S3 Bucket on CloudFerro Cloud
=================================================

**Introduction**

Due to an *openstack-cli* limitation, removing S3 buckets with more than 10,000 objects will fail when using the command:

```
openstack container delete --recursive <<bucket_name>>
```

showing the following error:

```
Conflict (HTTP 409) (Request-ID: tx00000000000001bb5e8e5-006135c488-35bc5d520-dias_default) clean_up DeleteContainer: Conflict (HTTP 409) (Request-ID:)
```
**Recommended solution**

To delete a large S3 bucket, we can use **s3cmd**.

In order to acquire access to your object storage buckets via s3cmd, you first have to generate your own EC2 credentials with the following tutorial: [How to generate and manage EC2 credentials on CloudFerro Cloud](../cloud/How-to-generate-ec2-credentials-on-CloudFerro-Cloud.html)

After that, you have to configure s3cmd as explained in the following article: [How to access private object storage using S3cmd or boto3 on CloudFerro Cloud](How-to-access-private-object-storage-using-S3cmd-or-boto3-on-CloudFerro-Cloud.html)

After this, you should be able to list and access your object storage.

List your buckets with the command:

```
eouser@vm01:$ s3cmd ls
2022-02-02 22:22  s3://large-bucket
```

Now you are able to delete your large bucket with the command presented below, where **-r** means recursive removal.

```
s3cmd rb -r s3://large-bucket
```

The bucket itself and all the files inside will be removed:

```
WARNING: Bucket is not empty. Removing all the objects from it first. This may take some time...
delete: 's3://large-bucket/example_file.jpg'
delete: 's3://large-bucket/example_file.txt'
delete: 's3://large-bucket/example_file.png'
...
Bucket 's3://large-bucket/' removed
```

Your large bucket has been successfully removed and the list of buckets is empty.

```
eouser@vm01:$ s3cmd ls
eouser@vm01:$
```
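For boto3 users, an equivalent approach is worth sketching: the S3 DeleteObjects call accepts at most 1,000 keys per request, so emptying a large bucket programmatically means deleting in batches. The batching logic below is plain Python; the commented `delete_objects` call is an assumption based on the standard boto3 client API and is not executed here:

```python
def chunked(keys, size=1000):
    """Split a list of object keys into lists of at most `size` keys,
    the per-request limit of the S3 DeleteObjects operation."""
    return [keys[i:i + size] for i in range(0, len(keys), size)]

# Each batch would then be passed to an S3 client, e.g. with boto3:
#   s3.delete_objects(Bucket='large-bucket',
#                     Delete={'Objects': [{'Key': k} for k in batch]})

# Hypothetical key names, just to exercise the batching:
batches = chunked(['file-{}.txt'.format(i) for i in range(2500)])
print(len(batches))  # 3 batches: 1000 + 1000 + 500
```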
How to install s3cmd on Linux on CloudFerro Cloud
=================================================

In this article you will learn how to install [s3cmd](https://github.com/s3tools/s3cmd) on Linux. **s3cmd** can be used, among other things, to:

> * download files from EODATA repositories, and
> * store files in object storage available on CloudFerro Cloud,

without mounting these resources as a file system.

What We Are Going To Cover
--------------------------

> * Installing **s3cmd** using **apt**
> * Uninstalling **s3cmd**

Prerequisites
-------------

No. 1 **Account**

You need a CloudFerro Cloud hosting account with access to the Horizon interface: <https://horizon.cloudferro.com>.

No. 2 **A virtual machine or local computer**

These instructions are for Ubuntu 22.04, either on a local computer or on a virtual machine hosted on the CloudFerro Cloud cloud.

Other operating systems and environments are outside the scope of this article and might require adjusting the instructions accordingly.

If you want to install **s3cmd** on a virtual machine hosted on the CloudFerro Cloud cloud, follow one of these articles:
How to Mount Object Storage Container as a File System in Linux Using s3fs on CloudFerro Cloud[](#how-to-mount-object-storage-container-as-a-file-system-in-linux-using-s3fs-on-brand-name "Permalink to this headline")
|
||||
=========================================================================================================================================================================================================================
|
||||
|
||||
The following article covers mounting of object storage containers using **s3fs** on Linux. One of possible use cases is having easy access to content of such containers on different computers and virtual machines. For access, you can use your local Linux computer or virtual machines running on CloudFerro Cloud cloud. All users of the operating system should have read, write and execute privileges on contents of these containers.
|
||||
|
||||
What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
|
||||
---------------------------------------------------------------------------------------
|
||||
|
||||
> * Installing **s3fs**
|
||||
> * Creating a file containing login credentials
|
||||
> * Creating a mounting point
|
||||
> * Mounting the container using **s3fs**
|
||||
> * Testing whether mounting was successful
|
||||
> * Dismount a container
|
||||
> * Configuring automatic mounting
|
||||
> * Stopping automatic mounting of a container
|
||||
> * Potential problems with the way s3fs handles objects
|
||||
|
||||
Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**

You need a CloudFerro Cloud hosting account with access to the Horizon interface: <https://horizon.cloudferro.com>.

No. 2 **Machine running Linux**

You need a machine running Linux. It can be a virtual machine running on the CloudFerro Cloud cloud or your local Linux computer.

This article was written for Ubuntu 22.04. If you are running a different distribution, adjust the commands accordingly.

No. 3 **Object storage container**

You need at least one object storage container on the CloudFerro Cloud cloud. The following article shows how to create one: [How to use Object Storage on CloudFerro Cloud](How-to-use-Object-Storage-on-CloudFerro-Cloud.html).

As a concrete example, let's say that the container is named **my-files** and that it contains two items. This is what it could look like in the Horizon dashboard:



With the proper **s3fs** command from this article, you will be able to access that container remotely, but through a local file system.

No. 4 **Generated EC2 credentials**

You need to have EC2 credentials generated for your object storage containers. The following article explains how to do it: [How to generate and manage EC2 credentials on CloudFerro Cloud](../cloud/How-to-generate-ec2-credentials-on-CloudFerro-Cloud.html).

No. 5 **Knowledge of the Linux command line**

Basic knowledge of the Linux command line is required.
Step 1: Sign in to your Linux machine[](#step-1-sign-in-to-your-linux-machine "Permalink to this headline")
------------------------------------------------------------------------------------------------------------

Sign in to an account which has **sudo** privileges. If you are using SSH to connect to a virtual machine running on the CloudFerro Cloud cloud, the username will likely be **eouser**.

Step 2: Install s3fs[](#step-2-install-s3fs "Permalink to this headline")
--------------------------------------------------------------------------

First, check whether **s3fs** is installed on your machine. Enter the following command in the terminal:

```
which s3fs
```

If **s3fs** is already installed, the output will contain its location, for example:

```
/usr/local/bin/s3fs
```

If the output is empty, **s3fs** is probably not installed. Update your packages and install **s3fs** using this command:

```
sudo apt update && sudo apt upgrade && sudo apt install s3fs
```
Step 3: Create a file or files containing login credentials[](#step-3-create-file-or-files-containing-login-credentials "Permalink to this headline")
----------------------------------------------------------------------------------------------------------------------------------------------------

In this article, we are going to use plain text files for storing S3 credentials - access and secret keys. (If you don't have the credentials yet, follow Prerequisite No. 4.) Each file can store one such pair and can be used to mount all object storage containers to which that key pair provides access.

For each key pair you intend to use, create a text file. The content of the file will be just one line,

> * starting with the access key,
> * followed by a colon,
> * followed by the secret key

from that key pair. If the access key is **1234abcd** and the secret key is **4321dcba**, the corresponding text file should have the following content:

```
1234abcd:4321dcba
```

Change the permissions of each file containing a key pair to **600**. If such a file is called **.passwd-s3fs** and is stored in your home directory, the command for changing its permissions would be:

```
chmod 600 ~/.passwd-s3fs
```
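The steps above can be combined into one safe sequence; a minimal sketch, assuming the same placeholder keys (**1234abcd** / **4321dcba**) and the **~/.passwd-s3fs** location:

```shell
# Create the credentials file in one step, never leaving it world-readable.
# 1234abcd and 4321dcba are placeholders - substitute your own EC2 key pair.
umask 077                                      # new files start out with mode 600
printf '%s:%s\n' "1234abcd" "4321dcba" > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs                       # explicit, in case the file already existed
stat -c '%a' ~/.passwd-s3fs                    # prints: 600
```

Setting `umask 077` before writing means the file is never created with permissive modes, even for an instant; s3fs will refuse to use a credentials file that other users can read.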
Step 4: Create mount points[](#step-4-create-mount-points "Permalink to this headline")
----------------------------------------------------------------------------------------

The files inside your object storage container will appear inside a folder of your choice. Such a folder is called a *mount point* in this article. You can use an empty folder from your file system for that purpose, or create new folder(s) to use as mount points.

To keep things tidy, let us use in this example the standard Linux folder **/mnt**, which is where system administrators traditionally mount other file systems. For each container, use the usual **mkdir** command to create a subfolder of **/mnt**. For instance:

```
sudo mkdir /mnt/mount-point
```
Step 5: Mount a container[](#step-5-mount-a-container "Permalink to this headline")
------------------------------------------------------------------------------------

Here is a typical command to mount a container:

```
sudo s3fs my-files /mnt/mount-point -o passwd_file=~/.passwd-s3fs -o url=https://s3.waw3-1.cloudferro.com -o endpoint="waw3-1" -o use_path_request_style -o umask=0000 -o allow_other
```

You will need to change some of the parameters involved - but not all of them. Here is what to change and what to use as prescribed:

**Edit container name and mount point**

> * **my-files** is the name of the container
> * **/mnt/mount-point** is the directory in the Linux file system which will be the mount point for that container

**Edit key pair location** (note it starts with **-o**)

> * **-o passwd\_file** - location of the file with the key pair used for mounting that container

**Do not edit the following parameters** - just copy and paste them verbatim

> * **-o url** - the endpoint URL address
> * **-o endpoint** - the S3 region
> * **-o use\_path\_request\_style** - fixes issues with certain characters (such as dots) in container names
> * **-o umask** - the *umask* describing permissions for accessing the container - in this case read, write and execute for everyone
> * **-o allow\_other** - allows access to the container for all users on the system

Once you have executed the command, navigate to the directory in which you mounted the object storage container. If you are using folder **/mnt/mount-point**, the command is:

```
cd /mnt/mount-point
```

List the contents of the container; be ready to wait a couple of seconds for the operation to complete:

```
ls
```

You should see the files from your object storage container.

Suppose you mounted an object storage container under **/mnt/mount-point** which contains

> * a directory called **some-directory** and
> * a file named **text-file.txt**.

This is what executing the **ls** command from the mount point could produce:



To mount multiple containers, repeat the **s3fs** command with the relevant parameters, as many times as needed.
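If you mount the same container often, the editable parameters can be kept in one place by wrapping the command in a small script. This is only a sketch - the container name, mount point and credentials path are assumptions you must replace; the `url` and `endpoint` values stay as prescribed above:

```shell
#!/bin/bash
# Build the s3fs mount command from variables, so only this block needs editing.
CONTAINER="my-files"                # your container name
MOUNT_POINT="/mnt/mount-point"      # your mount point from Step 4
PASSWD_FILE="$HOME/.passwd-s3fs"    # your key pair file from Step 3

CMD="s3fs $CONTAINER $MOUNT_POINT -o passwd_file=$PASSWD_FILE -o url=https://s3.waw3-1.cloudferro.com -o endpoint=waw3-1 -o use_path_request_style -o umask=0000 -o allow_other"

# Print the final command for review; run it with sudo when it looks right.
echo "$CMD"
```

Printing the command before running it makes it easy to spot, for example, a mistyped container name, without waiting for a mount error.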
Unmounting a container[](#unmounting-a-container "Permalink to this headline")
-------------------------------------------------------------------------------

To unmount a container, first make sure that the content of your object storage is not in use by any application on your system, including terminals and file managers. After that, execute the following command, replacing **/mnt/mount-point** with the mount point of your object storage container:

```
sudo umount -lf /mnt/mount-point
```
Configuring automatic mounting of your object storage[](#configuring-automatic-mounting-of-your-object-storage "Permalink to this headline")
---------------------------------------------------------------------------------------------------------------------------------------------

Here is how to configure automatic mounting of your object storage containers after system startup.

Check the location under which **s3fs** is installed on your system:

```
which s3fs
```

The output should contain the full location of the **s3fs** binary on your system. On Ubuntu virtual machines created using default images on the CloudFerro Cloud cloud, it will likely be:

```
/usr/local/bin/s3fs
```

Write it down - you will need it later.

Open the file **/etc/fstab** for editing. You will need **sudo** permissions for that. For example, if you wish to use **nano** for this purpose, execute the following command:

```
sudo nano /etc/fstab
```

Append the following line to it:

```
/usr/local/bin/s3fs#my-files /mnt/mount-point fuse passwd_file=/home/eouser/.passwd-s3fs,_netdev,allow_other,use_path_request_style,uid=0,umask=0000,mp_umask=0000,gid=0,url=https://s3.waw3-1.cloudferro.com,region=waw3-1 0 0
```

Replace the parameters from that line as follows:

> * **/usr/local/bin/s3fs** with the full location of the **s3fs** binary you obtained previously
> * **my-files** with the name of your object storage container
> * **/mnt/mount-point** with the full location of the directory which you chose as a mount point
> * **/home/eouser/.passwd-s3fs** with the full location of the file containing the key pair used to access your object storage container, created in Step 3

Append such a line for every container you wish to have automatically mounted.

Reboot your VM and check whether the mounting was successful by navigating to each mount point and making sure that the files from those object storage containers are there.
Stopping automatic mounting of a container[](#stopping-automatic-mounting-of-a-container "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------

If you no longer want your containers to be automatically mounted, first make sure that none of them is in use by any application on your system, including terminals and file managers.

After that, unmount each container for which you want to stop automatic mounting. Execute the following command - replacing **/mnt/mount-point** with the mount point of your first container - and repeat it for every other such container, if applicable.

```
sudo umount /mnt/mount-point
```

Finally, modify the **/etc/fstab** file.

To do that, open the file with **sudo** in your favorite text editor. If your favorite text editor is **nano**, use this command:

```
sudo nano /etc/fstab
```

Remove the lines responsible for automatic mounting of the containers you no longer want mounted automatically. If you followed this article, these lines were added in the section *Configuring automatic mounting of your object storage*.

Save the file and exit the text editor.

You can now reboot your virtual machine to check that the containers are indeed no longer being mounted.
Potential problems with the way s3fs handles objects[](#potential-problems-with-the-way-s3fs-handles-objects "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------

**s3fs** attempts to translate object storage into a file system, and most of the time it is successful. Sometimes, however, this is not possible. One potential problem with **s3fs** comes from the fact that object storage allows a folder and a file to share the same name in the same location, which is impossible in ordinary operating systems.

Here is such a situation in the Horizon dashboard:



The first row contains an object named **item**, with a size of 10 bytes. The second row shows the object called **item** labeled in blue, described as a folder. Both the "file" and the "folder" are represented in Horizon to look like a regular file and a regular folder from Linux or Windows - except that they are not. In **S3** terminology, the former is an object with a name ending in **item**, while the latter is an object with a name ending in **item/** (note the trailing slash). Since their names are different, they can coexist in an object store based on the **S3** standard.

When the above mentioned location is accessed through **s3fs**, the **ls** command will return only the folder:



To prevent this issue, establish and use consistent file system naming conventions when utilizing object storage.

Another potential problem is that some changes to the object storage might not be immediately visible in the file system created by **s3fs**. Wait a bit and check again to see whether that is the case.

What To Do Next[](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

You can also access object storage from CloudFerro Cloud without mounting it as a file system.

Check the following articles for more information:
How to mount object storage container from CloudFerro Cloud as file system on local Windows computer[](#how-to-mount-object-storage-container-from-brand-name-as-file-system-on-local-windows-computer "Permalink to this headline")
=====================================================================================================================================================================================================================================

This article describes how to configure direct access to object storage containers from the CloudFerro Cloud cloud in the **This PC** window on your local Windows computer. Such containers will be mounted as network drives, for example:



You will configure mounting using an account which has administrative privileges obtained through UAC (User Account Control). After this process, the container should also be available to accounts which do not have such administrative privileges.

Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**

You need a CloudFerro Cloud hosting account with access to the Horizon interface: <https://horizon.cloudferro.com>.

No. 2 **Object storage container**

You need at least one object storage container on the CloudFerro Cloud cloud. If you do not have one yet, please follow this article: [How to use Object Storage on CloudFerro Cloud](How-to-use-Object-Storage-on-CloudFerro-Cloud.html)

No. 3 **Generated EC2 credentials**

You need to generate EC2 credentials for your account.

The following article explains how to do it on Linux: [How to generate and manage EC2 credentials on CloudFerro Cloud](../cloud/How-to-generate-ec2-credentials-on-CloudFerro-Cloud.html).

If you instead want to do it on Windows, you will need to install the OpenStack CLI client first. Check one of these articles to learn more.
How to use Object Storage on CloudFerro Cloud[](#how-to-use-object-storage-on-brand-name "Permalink to this headline")
=======================================================================================================================

Object storage on the CloudFerro Cloud cloud can be used to store your files in *containers*. In this article, you will create a basic container and perform basic operations on it, using a web browser.

What We Are Going To Cover[](#what-we-are-going-to-cover "Permalink to this headline")
---------------------------------------------------------------------------------------

> * Creating a new object storage container
> * Viewing the container
> * Creating a new folder
> * Navigating through folders
> * Uploading a file
> * Deleting files and folders from a container
> * Enabling or disabling public access to object storage containers
> * Using a public link

Prerequisites[](#prerequisites "Permalink to this headline")
-------------------------------------------------------------

No. 1 **Account**

You need a CloudFerro Cloud hosting account with access to the Horizon interface: <https://horizon.cloudferro.com>.
Creating a new object storage container[](#creating-a-new-object-storage-container "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------

Log in to the Horizon dashboard and navigate to the section **Object Store > Containers**.

You should see a list of object storage containers. By default, it will be empty:



To create a new object storage container, click the  button. You should get the following form:



Enter the name of your choice for that container in the **Container Name** text field.

In general, bucket names should follow domain name constraints:

Warning

Bucket names must be unique.

Bucket names cannot be formatted as an IP address.

Bucket names can be between 3 and 63 characters long.

Bucket names must not contain uppercase characters or underscores.

Bucket names must start with a lowercase letter or a number.

Bucket names must be a series of one or more labels. Adjacent labels are separated by a single period (.). Bucket names can contain lowercase letters, numbers, and hyphens. Each label must start and end with a lowercase letter or a number.

Bucket names cannot contain forward slashes (**/**).
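The constraints above can be checked locally before submitting the form; a sketch in shell (the function name `valid_bucket_name` is ours, not part of any tool):

```shell
# valid_bucket_name NAME - returns 0 when NAME satisfies the constraints above.
valid_bucket_name() {
  name="$1"
  # Labels of lowercase letters, digits and hyphens, each starting and ending
  # with a letter or digit, joined by single periods.
  echo "$name" | grep -Eq '^[a-z0-9]([a-z0-9-]*[a-z0-9])?(\.[a-z0-9]([a-z0-9-]*[a-z0-9])?)*$' || return 1
  # Between 3 and 63 characters long.
  [ "${#name}" -ge 3 ] && [ "${#name}" -le 63 ] || return 1
  # Not formatted as an IPv4 address.
  echo "$name" | grep -Eq '^[0-9]{1,3}(\.[0-9]{1,3}){3}$' && return 1
  return 0
}

valid_bucket_name "my-files"    && echo "my-files: ok"          # prints: my-files: ok
valid_bucket_name "My_Bucket"   || echo "My_Bucket: rejected"   # prints: My_Bucket: rejected
valid_bucket_name "192.168.0.1" || echo "192.168.0.1: rejected" # prints: 192.168.0.1: rejected
```

Uniqueness, of course, can only be verified against the cloud itself when the container is actually created.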
Note

**Single-tenancy vs. multi-tenancy**

On the WAW3-1, WAW3-2 and FRA1-2 clouds, **single tenancy** is enabled. This means that no two object storage containers in the same cloud can have an identical name. Avoid using common names such as **storage** or **files**.

In this example, we will use the name **file-container** for our object storage container. Your name, of course, should be different.

The **Container Access** section has two options:

**Public**
: This will generate a link. Anyone who has it will be able to access files stored in that object storage container, even without being a member of the CloudFerro Cloud cloud.

**Not Public**
: This will not generate the link explained above. The container will only be available from within your project, unless you set a bucket sharing policy (not covered in this article).

Click **Submit** and see the new container in the list:



You may encounter the following error:



The reason might be that you are trying to create an object storage container which has the same name as another container. Try using a different name.
Viewing the container[](#viewing-the-container "Permalink to this headline")
-----------------------------------------------------------------------------

To view the content of the container, click its name on the list:



You should see the files in the container; initially, it will be empty. You can now create folders and upload files to this container.
Creating a new folder[](#creating-a-new-folder "Permalink to this headline")
-----------------------------------------------------------------------------

To create a new folder, click the  button. You should get the following form:



Enter the name for your folder in the **Folder Name** text field. If you use a forward slash, a tree of folders will be created. For example, if you wish to create a folder called **place1** and, inside it, another folder called **place2**, enter the following:

```
place1/place2
```

Adding a forward slash at the beginning of such a directory structure is optional. The folders will be created relative to the directory you are currently in, not to the root directory of your object storage container.

Click **Create Folder** to confirm.
Navigating through folders[](#navigating-through-folders "Permalink to this headline")
---------------------------------------------------------------------------------------

To navigate to another folder in your object storage container, click its name. Folder names are written in blue and the word **Folder** is shown in their **Size** column.

The section above the text field **Click here for filters or full text search** shows the folder you are currently in. It could, for example, look like this:



That would be the directory **another-folder**, inside the **second-folder** directory, which, in turn, is inside the **first-folder** directory.

Click the name of the folder you want to go to.
Uploading a file[](#uploading-a-file "Permalink to this headline")
-------------------------------------------------------------------

To upload a file to your object storage container, click the  button. You should get the following window:



Click **Browse…** to open the file browser which can be used to choose the file you wish to upload. Its look will vary depending on the operating system you are using and other factors.

Once you have chosen the file, its name should be shown in the **File** section, for example:



You can enter the name and location of the file within your object storage container in the **File Name** text field. This allows you to rename the file you are uploading and to put it into a different folder. Forward slashes specify the location within the folder hierarchy, relative to the folder you are currently in. If you enter the name of a folder which does not exist yet, it will be created.

If you do not enter anything into the **File Name** text field, the file will be uploaded to the folder you are currently in and will not be renamed.

Once you're ready, click **Upload File**. If the upload was successful, you should receive this confirmation:



For example, let's assume that you are in the root directory of your object storage container and want to upload a file called **uploaded-file.txt** to the directory called **first-folder** located there. In that case, you should enter the following in the **File Name** text field:

```
first-folder/uploaded-file.txt
```

Your file should then be uploaded to that directory:



Warning

Having two files or two folders of the same name in the same directory is impossible. Having a file and a folder under the same name in the same directory (the extension is considered part of the name here) may lead to problems, so it is best to avoid it.
Deleting files and folders from a container[](#deleting-files-and-folders-from-a-container "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------

### Deleting one file[](#deleting-one-file "Permalink to this headline")

To delete a file from the container, open the drop-down menu next to the **Download** button.



Click **Delete**.

You should get the following request for confirmation:



Click **Delete** to confirm. Your file should be deleted.

### Deleting one folder[](#deleting-one-folder "Permalink to this headline")

If you want to delete a folder and its contents, click the  button next to it. You should get a similar request for confirmation as previously. Like before, click **Delete** to confirm.

### Deleting multiple files and/or folders[](#deleting-multiple-files-and-or-folders "Permalink to this headline")

If you want to delete multiple files and/or folders at the same time, use the checkboxes on the left of the list to select the ones you want to remove, for example:



You can also select all files and folders on a page by clicking the checkbox above the folders:



To delete the selected items, click the  button to the right of the button used to create new folders. In this case, you should also get a similar request for confirmation. Click **Delete** to confirm.
Recommended number of files in your object storage containers[](#recommended-number-of-files-in-your-object-storage-containers "Permalink to this headline")
-------------------------------------------------------------------------------------------------------------------------------------------------------------

It is recommended that you do not keep more than 1 000 000 (one million) files and folders in one object storage container, since listing them would become inefficient. If you want to store a larger number of files, use multiple object storage containers for that purpose.
Working with public object storage containers[](#working-with-public-object-storage-containers "Permalink to this headline")
-----------------------------------------------------------------------------------------------------------------------------

### Enabling or disabling public access to object storage containers[](#enabling-or-disabling-public-access-to-object-storage-containers "Permalink to this headline")

During the creation of your object storage container, you had the option to set whether it should be accessible by the public or not. If you wish to change that setting later, first find the name of the container you wish to modify in the container list.

The details of that object storage container should appear as on the screenshot below - if not, click its name to make them appear:



Check or uncheck the **Public Access** checkbox, depending on whether you wish to enable or disable such access.

If you enabled **Public Access**, a link to your object storage container will be provided.
### Using a public link[](#using-a-public-link "Permalink to this headline")

Once you have created a public link, enter it into the browser. You should see a list of all files and folders in your container, for example:



Forward slashes are used as separators between files and folders in the directory paths.

If you want to download a file from the root directory of your container, add its name to the link:



In this example, Firefox was used to access the file called **second-upload-file.txt** in the **file-container** object storage container.

If you end a download link with a forward slash, it will download an empty file instead.

To share a link for downloading a particular file in another folder, add the full location and name of the file within the folder structure:



In this example, a file called **another-uploaded-file.txt** in the directory called **second-folder** of the **file-container** object storage container was accessed.

Note that this method cannot be used to download folders.

Warning

If you share a link to one file from an object storage container, the recipient will be able to construct download links for all other files in that object storage container. Obviously, this can be a security risk.
What To Do Next[](#what-to-do-next "Permalink to this headline")
-----------------------------------------------------------------

Now that you have created your object storage container, you can mount it on the platform of your choice for easier access. There are many ways to do that, for instance:
Server-Side Encryption with Customer-Managed Keys (SSE-C) on CloudFerro Cloud[](#server-side-encryption-with-customer-managed-keys-sse-c-on-brand-name "Permalink to this headline")
=====================================================================================================================================================================================

Introduction[](#introduction "Permalink to this headline")
-----------------------------------------------------------

This guide explains how to encrypt your objects server-side with SSE-C.

Server-side encryption is a way of protecting data at rest; SSE encrypts only the object data. Server-side encryption with customer-provided encryption keys (SSE-C) allows you to supply your own keys for encryption. The server manages the encryption as it writes to disk and the decryption when you access your objects. The only thing you must manage is the encryption keys themselves.

SSE-C works at the moment of uploading an object: the server uses the encryption key you provide to apply AES-256 encryption to the data, then removes the encryption key from memory. To access the data again, you must provide the same encryption key with the request. The server verifies that the provided key matches, decrypts the object, and returns the object data to you.
Requirements[](#requirements "Permalink to this headline")
-----------------------------------------------------------

* A bucket ([How to use Object Storage on CloudFerro Cloud](How-to-use-Object-Storage-on-CloudFerro-Cloud.html))
* A user with the required access rights on the bucket
* EC2 credentials ([How to generate and manage EC2 credentials on CloudFerro Cloud](../cloud/How-to-generate-ec2-credentials-on-CloudFerro-Cloud.html))
* The AWS CLI installed and configured

If you have not used the AWS CLI before:

```
$ sudo apt install awscli
```

Then:

```
$ aws configure

AWS Access Key ID [None]: <your EC2 Access Key>
AWS Secret Access Key [None]: <your EC2 Secret Key>
Default region name [None]: <enter>
Default output format [None]: <enter>
```

**SSE-C at a glance**

> * *Only HTTPS.* S3 rejects any SSE-C request made over HTTP.
> * If you erroneously send a request over HTTP, for security you should discard the key and rotate keys as appropriate.
> * The ETag in the response is not the MD5 of the object data.
> * You are responsible for managing the encryption keys and for keeping track of which objects they were used for.
> * If the bucket is versioning-enabled, each object version can have its own encryption key.

Attention

If you lose an encryption key, the corresponding object is lost as well. Our servers do not store encryption keys, so it is not possible to access the data without them.
REST API[](#rest-api "Permalink to this headline")
|
||||
---------------------------------------------------
|
||||
|
||||
To encrypt or decrypt objects in SSE-C mode the following headers are required:
|
||||
|
||||
| Header | Type | Description |
| --- | --- | --- |
| x-amz-server-side-encryption-customer-algorithm | string | Encryption algorithm. Must be set to AES256 |
| x-amz-server-side-encryption-customer-key | string | 256-bit, base64-encoded encryption key used in the server-side encryption process |
| x-amz-server-side-encryption-customer-key-MD5 | string | Base64-encoded 128-bit MD5 digest of the encryption key according to [RFC 1321](https://www.rfc-editor.org/rfc/rfc1321). It is used to ensure that the encryption key has not been corrupted during transport and encoding. |
Note
The MD5 digest is computed over the raw key bytes, before base64 encoding.
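As a quick illustration (a sketch using `openssl` and the coreutils `base64` tool, with the sample key from Example No 1 below), the digest of the raw key bytes is different from the digest of their base64 form:

```shell
# Sample 32-byte key from this article's Example No 1.
secret="32bytesOfTotallyRandomCharacters"

# MD5 of the raw key bytes -- this is what the ...-customer-key-MD5 header expects.
echo -n "$secret" | openssl dgst -md5 -binary | base64

# MD5 of the base64-encoded key string -- NOT the value the header expects.
echo -n "$secret" | base64 | tr -d '\n' | openssl dgst -md5 -binary | base64
```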
These headers apply to the following API operations:
> * PutObject
> * PostObject
> * CopyObject (to target objects)
> * HeadObject
> * GetObject
> * InitiateMultipartUpload
> * UploadPart
> * UploadPart-Copy (to target parts)

Example No 1 Generate header values[](#example-no-1-generate-header-values "Permalink to this headline")
---------------------------------------------------------------------------------------------------------

```
secret="32bytesOfTotallyRandomCharacters"
key=$(echo -n "$secret" | base64)
keymd5=$(echo -n "$secret" | openssl dgst -md5 -binary | base64)
```
OR
```
openssl rand 32 > sse-c.key
key=$(cat sse-c.key | base64)
keymd5=$(cat sse-c.key | openssl dgst -md5 -binary | base64)
```
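Whichever variant you use, you can sanity-check the pair before sending it: decoding the key header value and recomputing its MD5 digest must reproduce the key-MD5 header value. A minimal sketch, using the sample key from the first variant and GNU coreutils `base64`:

```shell
secret="32bytesOfTotallyRandomCharacters"
key=$(echo -n "$secret" | base64)
keymd5=$(echo -n "$secret" | openssl dgst -md5 -binary | base64)

# Decode the key header value and recompute the digest; it must match keymd5.
check=$(echo -n "$key" | base64 -d | openssl dgst -md5 -binary | base64)
[ "$check" = "$keymd5" ] && echo "headers are consistent"
```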
Example No 2 aws-cli (s3api)[](#example-no-2-aws-cli-s3api "Permalink to this headline")
-----------------------------------------------------------------------------------------

Upload an object with SSE-C encryption enabled:
```
aws s3api put-object \
    --bucket bucket-name --key object-name \
    --body contents.txt \
    --sse-customer-algorithm AES256 \
    --sse-customer-key $key \
    --sse-customer-key-md5 $keymd5 \
    --endpoint-url https://s3.waw3-1.cloudferro.com
```
Example No 3 aws-cli (s3)[](#example-no-3-aws-cli-s3 "Permalink to this headline")
-----------------------------------------------------------------------------------

```
aws s3 cp file.txt s3://bucket-name/ \
    --sse-c-key $secret \
    --sse-c AES256 \
    --endpoint-url https://s3.waw3-1.cloudferro.com
```
Example No 4 aws-cli (s3 blob)[](#example-no-4-aws-cli-s3-blob "Permalink to this headline")
---------------------------------------------------------------------------------------------

```
aws s3 cp file.txt s3://bucket/ \
    --sse-c-key fileb://sse-c.key \
    --sse-c AES256 \
    --endpoint-url https://s3.waw3-1.cloudferro.com
```
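As a cross-check (assuming the key file was created with `openssl rand 32` as in Example No 1), the raw 32-byte key file corresponds to a 44-character base64 string, which is the form the `x-amz-server-side-encryption-customer-key` header carries on the wire:

```shell
# Hypothetical key file, generated as in Example No 1.
openssl rand 32 > sse-c.key

# Base64 form of the key; tr strips the trailing newline added by base64.
key=$(base64 < sse-c.key | tr -d '\n')

# 32 raw bytes encode to 44 base64 characters (including padding).
echo "${#key}"
```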
Note
At the moment **s3cmd** does not support SSE-C encryption.
Downloading the encrypted object[](#downloading-the-encrypted-object "Permalink to this headline")
---------------------------------------------------------------------------------------------------

```
aws s3api get-object <file_name> --bucket <bucket_name> \
    --key <object_key> \
    --sse-customer-key $secret \
    --sse-customer-algorithm AES256 \
    --endpoint-url https://s3.waw3-1.cloudferro.com
```
or
```
aws s3api get-object <file_name> --bucket <bucket_name> \
    --key <object_key> \
    --sse-customer-key fileb://<key_name> \
    --sse-customer-algorithm AES256 \
    --endpoint-url https://s3.waw3-1.cloudferro.com
```