S3 object storage

Amazon Simple Storage Service (S3) is an object storage service that exposes an HTTP-based API known as the S3 API. Binero cloud provides its own object storage service that supports this HTTP-based S3 API.

You can consume our object storage using the S3 API from any of our storage regions and availability zones, either from one availability zone at a time or from both using replication.

A container holding objects is called a bucket in S3 terminology.

Note

The complete list of S3 features is available here

Note

See known limitations for more information on compatibility and interoperability between the S3 and Swift APIs.

Setting up credentials

Creating S3 credentials (which are separate from any other credentials in the platform) can only be done using the OpenStack command-line client. Follow these steps:

  • Run this command: openstack ec2 credentials create

The response contains several fields; save the values of the access and secret fields, as these are the S3 credentials.
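For reference, the output looks roughly like the following; the values shown are shortened and purely illustrative, and the exact fields can vary with the client version:

openstack ec2 credentials create
+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| access     | 0a1b2c3d4e5f6789...              |
| links      | {...}                            |
| project_id | 9f8e7d6c5b4a...                  |
| secret     | 5f4e3d2c1b0a9876...              |
| trust_id   | None                             |
| user_id    | 1a2b3c4d5e6f...                  |
+------------+----------------------------------+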

Important

The generated keys are not protected by access control within your project. This means that anyone you give access to the project, for instance developers, can read the keys.

S3 client

S3 is an API-based protocol, meaning that it is administered through API requests. To simplify the management of S3 in the cloud, we recommend installing the official client from AWS, which is available here. Once it is installed, you should be able to run aws --version and get the installed version as output.
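The version check looks something like this; the exact version string will of course differ on your system:

aws --version
aws-cli/2.15.30 Python/3.11.8 Linux/6.5.0 exe/x86_64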

Configure your client with your credentials (from the “Setting up credentials” step above) by creating the file ~/.aws/credentials (on a Linux or macOS based computer) with the following contents:

[default]
aws_access_key_id=[ACCESS_KEY]
aws_secret_access_key=[SECRET_KEY]
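If you prefer not to edit the file by hand, the same values can be written by the client itself using the aws configure set subcommand; a minimal sketch, where the placeholders in square brackets are your own keys:

aws configure set aws_access_key_id [ACCESS_KEY]
aws configure set aws_secret_access_key [SECRET_KEY]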

When the configuration file is complete, you can test reaching the cloud by, for instance, running this command: aws --endpoint=https://object-eu-se-1a.binero.cloud s3api list-buckets. This lists the buckets in the account from the non-replicated endpoint of availability zone europe-se-1a. If you have no buckets set up, it will return an empty “Buckets” array.
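With no buckets created yet, the response is typically JSON along these lines; the owner values here are illustrative:

{
    "Buckets": [],
    "Owner": {
        "DisplayName": "example-project",
        "ID": "example-project"
    }
}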

Creating a bucket

To create a bucket via S3, you would either use the API (not covered in this documentation) or the S3 client. To use the latter to create a bucket, follow these steps:

  • Decide which storage policy you want to use. Save the name.

  • Decide whether or not to use replication.

  • Decide in which availability zone to store your data and save the name.

  • Based on replication (or not) as well as availability zone, choose the right endpoint. Save the endpoint URL.

  • Based on whether you use replication, the LocationConstraint will be either europe-se-1 or europe-se-1-rep; save the one that is right for your use-case.

  • Run this command: aws --endpoint=[ENDPOINT_URL] s3api create-bucket --bucket [BUCKET_NAME] --create-bucket-configuration LocationConstraint=[LOCAL_CONSTRAINT]:[STORAGE_POLICY_NAME], replacing the items in square brackets with the proper data from the previous steps. The storage policy is optional; the default policy is used if it is not specified.

  • Verify by running this command: aws --endpoint=[ENDPOINT_URL] s3api list-buckets

You are now able to save data in your bucket from your application, using your credentials. A complete worked example is sketched below.
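As an illustration, a full sequence against the non-replicated endpoint in availability zone europe-se-1a might look like the following; the bucket name my-bucket is a placeholder, and the storage policy is omitted so that the default is used:

aws --endpoint=https://object-eu-se-1a.binero.cloud s3api create-bucket --bucket my-bucket --create-bucket-configuration LocationConstraint=europe-se-1
aws --endpoint=https://object-eu-se-1a.binero.cloud s3api list-buckets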

Deleting a bucket

To delete a bucket using the aws client, follow these steps:

  • Run this command: aws --endpoint=[ENDPOINT_URL] s3api list-buckets and save the name of the bucket you want to delete.

  • Run this command: aws --endpoint=[ENDPOINT_URL] s3api delete-bucket --bucket [BUCKET_NAME], replacing [BUCKET_NAME] with the name of the bucket.

Note

The delete will fail unless the bucket is empty.
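If the bucket still contains objects, one way to empty it first is the higher-level aws s3 rm command with the --recursive flag. A minimal sketch, assuming my-bucket is a placeholder bucket name and that you really do want to remove all objects in it:

aws --endpoint=[ENDPOINT_URL] s3 rm s3://my-bucket --recursive
aws --endpoint=[ENDPOINT_URL] s3api delete-bucket --bucket my-bucket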