Class RightAws::S3Interface
In: lib/s3/right_s3_interface.rb
Parent: RightAwsBase

Included Modules

RightAwsBaseInterface

Constants

USE_100_CONTINUE_PUT_SIZE = 1_000_000
DEFAULT_HOST = 's3.amazonaws.com'
DEFAULT_PORT = 443
DEFAULT_PROTOCOL = 'https'
DEFAULT_SERVICE = '/'
REQUEST_TTL = 30
DEFAULT_EXPIRES_AFTER = 1 * 24 * 60 * 60
ONE_YEAR_IN_SECONDS = 365 * 24 * 60 * 60
AMAZON_HEADER_PREFIX = 'x-amz-'
AMAZON_METADATA_PREFIX = 'x-amz-meta-'

Public Class methods

Creates a new RightAws::S3Interface instance.

 s3 = RightAws::S3Interface.new('1E3GDYEOGFJPIT7XXXXXX','hgTHt68JY07JKUY08ftHYtERkjgtfERn57XXXXXX', {:multi_thread => true, :logger => Logger.new('/tmp/x.log')}) #=> #<RightAws::S3Interface:0xb7b3c27c>

Params is a hash:

   {:server       => 's3.amazonaws.com', # Amazon service host: 's3.amazonaws.com' (default)
    :port         => 443,                # Amazon service port: 80 or 443 (default)
    :protocol     => 'https',            # Amazon service protocol: 'http' or 'https' (default)
    :multi_thread => true|false,         # Multi-threaded (one connection per thread): true or false (default)
    :logger       => Logger Object}      # Logger instance: logs to STDOUT if omitted

Public Instance methods

Retrieves bucket location.

 s3.create_bucket('my-awesome-bucket-us')        #=> true
 puts s3.bucket_location('my-awesome-bucket-us') #=> '' (Amazon's default value assumed)

 s3.create_bucket('my-awesome-bucket-eu', :location => :eu) #=> true
 puts s3.bucket_location('my-awesome-bucket-eu')            #=> 'EU'

Removes all keys from bucket. Returns true or an exception.

 s3.clear_bucket('my_awesome_bucket') #=> true

Copy an object.

 directive: :copy    - copy meta-headers from source (default value)
            :replace - replace meta-headers by passed ones

 # copy a key with meta-headers
 s3.copy('b1', 'key1', 'b1', 'key1_copy') #=> {:e_tag=>"\"e8b...8d\"", :last_modified=>"2008-05-11T10:25:22.000Z"}

 # copy a key, overwrite meta-headers
 s3.copy('b1', 'key2', 'b1', 'key2_copy', :replace, 'x-amz-meta-family'=>'Woho555!') #=> {:e_tag=>"\"e8b...8d\"", :last_modified=>"2008-05-11T10:26:22.000Z"}

See: http://docs.amazonwebservices.com/AmazonS3/2006-03-01/UsingCopyingObjects.html
     http://docs.amazonwebservices.com/AmazonS3/2006-03-01/RESTObjectCOPY.html

Creates a new bucket. Returns true or an exception.

 # create a bucket on the US (default) endpoint
 s3.create_bucket('my-awesome-bucket-us') #=> true
 # create a bucket on the European endpoint
 s3.create_bucket('my-awesome-bucket-eu', :location => :eu) #=> true

Generates link for ‘CreateBucket’.

 s3.create_bucket_link('my_awesome_bucket') #=> url string

Deletes key. Returns true or an exception.

 s3.delete('my_awesome_bucket', 'log/current/1.log') #=> true

Deletes a bucket. The bucket must be empty! Returns true or an exception.

 s3.delete_bucket('my_awesome_bucket')  #=> true

See also: force_delete_bucket method

Generates link for ‘DeleteBucket’.

 s3.delete_bucket_link('my_awesome_bucket') #=> url string

Deletes all keys for which ‘folder_key’ may be treated as a ‘folder’ name. Returns an array of the string keys that have been deleted.

 s3.list_bucket('my_awesome_bucket').map{|key_data| key_data[:key]} #=> ['test','test/2/34','test/3','test1','test1/logs']
 s3.delete_folder('my_awesome_bucket','test')                       #=> ['test','test/2/34','test/3']

Generates link for ‘DeleteObject’.

 s3.delete_link('my_awesome_bucket',key) #=> url string

Deletes all keys in bucket then deletes bucket. Returns true or an exception.

 s3.force_delete_bucket('my_awesome_bucket')

Retrieves object data from Amazon. Returns a hash or an exception.

 s3.get('my_awesome_bucket', 'log/current/1.log') #=>

     {:object  => "Ola-la!",
      :headers => {"last-modified"     => "Wed, 23 May 2007 09:08:04 GMT",
                   "content-type"      => "",
                   "etag"              => "\"000000000096f4ee74bc4596443ef2a4\"",
                   "date"              => "Wed, 23 May 2007 09:08:03 GMT",
                   "x-amz-id-2"        => "ZZZZZZZZZZZZZZZZZZZZ1HJXZoehfrS4QxcxTdNGldR7w/FVqblP50fU8cuIMLiu",
                   "x-amz-meta-family" => "Woho556!",
                   "x-amz-request-id"  => "0000000C246D770C",
                   "server"            => "AmazonS3",
                   "content-length"    => "7"}}

If a block is provided, yields incrementally to the block as the response is read. For large responses this is ideal, as the response can be ‘streamed’. The hash containing header fields is still returned. Example:

 foo = File.new('./chunder.txt', File::CREAT|File::RDWR)
 rhdr = s3.get('aws-test', 'Cent5V1_7_1.img.part.00') do |chunk|
   foo.write(chunk)
 end
 foo.close

Retrieves the ACL (access control policy) for a bucket or object. Returns a hash of headers and an xml doc with ACL data. See: http://docs.amazonwebservices.com/AmazonS3/2006-03-01/RESTAccessPolicy.html.

 s3.get_acl('my_awesome_bucket', 'log/current/1.log') #=>
   {:headers => {"x-amz-id-2"=>"B3BdDMDUz+phFF2mGBH04E46ZD4Qb9HF5PoPHqDRWBv+NVGeA3TOQ3BkVvPBjgxX",
                 "content-type"=>"application/xml;charset=ISO-8859-1",
                 "date"=>"Wed, 23 May 2007 09:40:16 GMT",
                 "x-amz-request-id"=>"B183FA7AB5FBB4DD",
                 "server"=>"AmazonS3",
                 "transfer-encoding"=>"chunked"},
    :object  => "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<AccessControlPolicy xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\"><Owner>
                 <ID>16144ab2929314cc309ffe736daa2b264357476c7fea6efb2c3347ac3ab2792a</ID><DisplayName>root</DisplayName></Owner>
                 <AccessControlList><Grant><Grantee xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:type=\"CanonicalUser\"><ID>
                 16144ab2929314cc309ffe736daa2b264357476c7fea6efb2c3347ac3ab2792a</ID><DisplayName>root</DisplayName></Grantee>
                 <Permission>FULL_CONTROL</Permission></Grant></AccessControlList></AccessControlPolicy>" }

Generates link for ‘GetACL’.

 s3.get_acl_link('my_awesome_bucket',key) #=> url string

Retrieves the ACL (access control policy) for a bucket or object. Returns a hash of {:owner, :grantees}.

 s3.get_acl_parse('my_awesome_bucket', 'log/current/1.log') #=>

 { :grantees=>
   { "16...2a"=>
     { :display_name=>"root",
       :permissions=>["FULL_CONTROL"],
       :attributes=>
        { "xsi:type"=>"CanonicalUser",
          "xmlns:xsi"=>"http://www.w3.org/2001/XMLSchema-instance"}},
    "http://acs.amazonaws.com/groups/global/AllUsers"=>
      { :display_name=>"AllUsers",
        :permissions=>["READ"],
        :attributes=>
         { "xsi:type"=>"Group",
           "xmlns:xsi"=>"http://www.w3.org/2001/XMLSchema-instance"}}},
  :owner=>
    { :id=>"16..2a",
      :display_name=>"root"}}

Retrieves the ACL (access control policy) for a bucket. Returns a hash of headers and an xml doc with ACL data.

Generates link for ‘GetBucketACL’.

 s3.get_bucket_acl_link('my_awesome_bucket') #=> url string

Generates link for ‘GetObject’.

If the bucket name complies with virtual-hosting naming, returns a link with the bucket as part of the host name:

 s3.get_link('my-awesome-bucket',key) #=> https://my-awesome-bucket.s3.amazonaws.com:443/asia%2Fcustomers?Signature=nh7...

Otherwise, returns an old-style link (the bucket is part of the path):

 s3.get_link('my_awesome_bucket',key) #=> https://s3.amazonaws.com:443/my_awesome_bucket/asia%2Fcustomers?Signature=QAO...

See: http://docs.amazonwebservices.com/AmazonS3/2006-03-01/VirtualHosting.html

Retrieves the logging configuration for a bucket. Returns a hash of {:enabled, :targetbucket, :targetprefix}.

 s3.interface.get_logging_parse(:bucket => "asset_bucket") #=>
   {:enabled=>true, :targetbucket=>"mylogbucket", :targetprefix=>"loggylogs/"}

Retrieves object data only (headers are omitted). Returns a string or an exception.

 s3.get_object('my_awesome_bucket', 'log/current/1.log') #=> 'Ola-la!'

Retrieves object metadata. Returns a hash of http_response_headers.

 s3.head('my_awesome_bucket', 'log/current/1.log') #=>
   {"last-modified"     => "Wed, 23 May 2007 09:08:04 GMT",
    "content-type"      => "",
    "etag"              => "\"000000000096f4ee74bc4596443ef2a4\"",
    "date"              => "Wed, 23 May 2007 09:08:03 GMT",
    "x-amz-id-2"        => "ZZZZZZZZZZZZZZZZZZZZ1HJXZoehfrS4QxcxTdNGldR7w/FVqblP50fU8cuIMLiu",
    "x-amz-meta-family" => "Woho556!",
    "x-amz-request-id"  => "0000000C246D770C",
    "server"            => "AmazonS3",
    "content-length"    => "7"}

Generates link for ‘HeadObject’.

 s3.head_link('my_awesome_bucket',key) #=> url string

Incrementally list the contents of a bucket. Yields the following hash to a block:

 s3.incrementally_list_bucket('my_awesome_bucket', { 'prefix'=>'t', 'marker'=>'', 'max-keys'=>5, 'delimiter'=>'' }) yields
  {
    :name => 'bucketname',
    :prefix => 'subfolder/',
    :marker => 'fileN.jpg',
    :max_keys => 234,
    :delimiter => '/',
    :is_truncated => true,
    :next_marker => 'fileX.jpg',
    :contents => [
      { :key => "file1",
        :last_modified => "2007-05-18T07:00:59.000Z",
        :e_tag => "000000000059075b964b07152d234b70",
        :size => 3,
        :storage_class => "STANDARD",
        :owner_id => "00000000009314cc309ffe736daa2b264357476c7fea6efb2c3347ac3ab2792a",
        :owner_display_name => "root"
      }, { :key => ... }, ..., { :key => ... }
    ],
    :common_prefixes => [
      "prefix1",
      "prefix2",
      ...,
      "prefixN"
    ]
  }
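Since incrementally_list_bucket pages through the bucket internally, the block can simply accumulate results. A hedged sketch (all_keys is a hypothetical helper, not part of right_aws) that collects every key name in a bucket:

```ruby
# Hypothetical helper: gather all key names by concatenating the
# :contents entries from each yielded listing chunk.
def all_keys(s3, bucket, options = {})
  keys = []
  s3.incrementally_list_bucket(bucket, options) do |chunk|
    keys.concat(chunk[:contents].map { |item| item[:key] })
  end
  keys
end
```

This avoids holding more than one listing page of metadata in memory at a time beyond the key names themselves.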

Returns an array of the customer's buckets. Each item is a hash.

 s3.list_all_my_buckets #=>
   [{:owner_id           => "00000000009314cc309ffe736daa2b264357476c7fea6efb2c3347ac3ab2792a",
     :owner_display_name => "root",
     :name               => "bucket_name",
     :creation_date      => "2007-04-19T18:47:43.000Z"}, ..., {...}]

Generates link for ‘ListAllMyBuckets’.

 s3.list_all_my_buckets_link #=> url string

Returns an array of the bucket's keys. Each array item (key data) is a hash.

 s3.list_bucket('my_awesome_bucket', { 'prefix'=>'t', 'marker'=>'', 'max-keys'=>5, 'delimiter'=>'' }) #=>
   [{:key                => "test1",
     :last_modified      => "2007-05-18T07:00:59.000Z",
     :owner_id           => "00000000009314cc309ffe736daa2b264357476c7fea6efb2c3347ac3ab2792a",
     :owner_display_name => "root",
     :e_tag              => "000000000059075b964b07152d234b70",
     :storage_class      => "STANDARD",
     :size               => 3,
     :service=> {'is_truncated' => false,
                 'prefix'       => "t",
                 'marker'       => "",
                 'name'         => "my_awesome_bucket",
                 'max-keys'     => "5"}, ..., {...}]

Generates link for ‘ListBucket’.

 s3.list_bucket_link('my_awesome_bucket') #=> url string

Move an object.

 directive: :copy    - copy meta-headers from source (default value)
            :replace - replace meta-headers by passed ones

 # move bucket1/key1 to bucket1/key2
 s3.move('bucket1', 'key1', 'bucket1', 'key2') #=> {:e_tag=>"\"e8b...8d\"", :last_modified=>"2008-05-11T10:27:22.000Z"}

 # move bucket1/key1 to bucket2/key2 with new meta-headers assignment
 s3.move('bucket1', 'key1', 'bucket2', 'key2', :replace, 'x-amz-meta-family'=>'Woho555!') #=> {:e_tag=>"\"e8b...8d\"", :last_modified=>"2008-05-11T10:28:22.000Z"}

Gets a custom option.

Saves object to Amazon. Returns true or an exception. Any header starting with AMAZON_METADATA_PREFIX is considered user metadata. It will be stored with the object and returned when you retrieve the object. The total size of the HTTP request, not including the body, must be less than 4 KB.

 s3.put('my_awesome_bucket', 'log/current/1.log', 'Ola-la!', 'x-amz-meta-family'=>'Woho556!') #=> true

This method is capable of ‘streaming’ uploads; that is, it can upload data from a file or other IO object without first reading all the data into memory. This is most useful for large PUTs - it is difficult to read a 2 GB file entirely into memory before sending it to S3. To stream an upload, pass an object that responds to ‘read’ (like the read method of IO) and to either ‘lstat’ or ‘size’. For files, this means streaming is enabled by simply making the call:

 s3.put(bucket_name, 'S3keyname.forthisfile',  File.open('localfilename.dat'))

If the IO object you wish to stream from responds to the read method but doesn‘t implement lstat or size, you can extend the object dynamically to implement these methods, or define your own class which defines these methods. Be sure that your class returns ‘nil’ from read() after having read ‘size’ bytes. Otherwise S3 will drop the socket after ‘Content-Length’ bytes have been uploaded, and HttpConnection will interpret this as an error.

This method now supports very large PUTs, where very large is > 2 GB.

For Win32 users: Files and IO objects should be opened in binary mode. If a text mode IO object is passed to PUT, it will be converted to binary mode.
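For sources that respond to read but lack lstat/size, a small wrapper is enough. A minimal sketch (StringStream is a hypothetical class, not part of right_aws) whose read returns nil once the data is exhausted, as required above:

```ruby
require 'stringio'

# Hypothetical wrapper supplying the 'read' and 'size' methods that a
# streaming put expects from its data argument.
class StringStream
  def initialize(data)
    @io   = StringIO.new(data)
    @size = data.bytesize
  end

  # Used to set the Content-Length header.
  attr_reader :size

  # StringIO#read(length) returns nil at EOF, which is exactly what the
  # uploader needs after 'size' bytes have been served.
  def read(*args)
    @io.read(*args)
  end
end
```

Such an object could then be passed directly: s3.put(bucket_name, 'S3keyname', StringStream.new(data)).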

Sets the ACL on a bucket or object.

Generates link for ‘PutACL’.

 s3.put_acl_link('my_awesome_bucket',key) #=> url string

Generates link for ‘PutBucketACL’.

 s3.put_bucket_acl_link('my_awesome_bucket') #=> url string

Generates link for ‘PutObject’.

 s3.put_link('my_awesome_bucket',key, object) #=> url string

Sets logging configuration for a bucket from the XML configuration document.

  params:
   :bucket
   :xmldoc

Rename an object.

 # rename bucket1/key1 to bucket1/key2
 s3.rename('bucket1', 'key1', 'key2') #=> {:e_tag=>"\"e8b...8d\"", :last_modified=>"2008-05-11T10:29:22.000Z"}

New experimental API for retrieving objects, introduced in RightAws 1.8.1. retrieve_object is similar in function to the older function get. It allows for optional verification of object md5 checksums on retrieval. Parameters are passed as hash entries and are checked for completeness as well as for spurious arguments.

If the optional :md5 argument is provided, retrieve_object verifies that the given md5 matches the md5 returned by S3. The :verified_md5 field in the response hash is set true or false depending on the outcome of this check. If no :md5 argument is given, :verified_md5 will be false in the response.

The optional argument of :headers allows the caller to specify arbitrary request header values. Mandatory arguments:

  :bucket - the bucket in which the object is stored
  :key    - the object address (or path) within the bucket

Optional arguments:

  :headers - hash of additional HTTP headers to include with the request
  :md5     - MD5 checksum against which to verify the retrieved object

 s3.retrieve_object(:bucket => "foobucket", :key => "foo")
   => {:verified_md5=>false,
       :headers=>{"last-modified"=>"Mon, 29 Sep 2008 18:58:56 GMT",
                  "x-amz-id-2"=>"2Aj3TDz6HP5109qly//18uHZ2a1TNHGLns9hyAtq2ved7wmzEXDOPGRHOYEa3Qnp",
                  "content-type"=>"",
                  "etag"=>"\"a507841b1bc8115094b00bbe8c1b2954\"",
                  "date"=>"Tue, 30 Sep 2008 00:52:44 GMT",
                  "x-amz-request-id"=>"EE4855DE27A2688C",
                  "server"=>"AmazonS3",
                  "content-length"=>"10"},
       :object=>"polemonium"}

 s3.retrieve_object(:bucket => "foobucket", :key => "foo", :md5=>'a507841b1bc8115094b00bbe8c1b2954')
   => {:verified_md5=>true,
       :headers=>{"last-modified"=>"Mon, 29 Sep 2008 18:58:56 GMT",
                  "x-amz-id-2"=>"mLWQcI+VuKVIdpTaPXEo84g0cz+vzmRLbj79TS8eFPfw19cGFOPxuLy4uGYVCvdH",
                  "content-type"=>"", "etag"=>"\"a507841b1bc8115094b00bbe8c1b2954\"",
                  "date"=>"Tue, 30 Sep 2008 00:53:08 GMT",
                  "x-amz-request-id"=>"6E7F317356580599",
                  "server"=>"AmazonS3",
                  "content-length"=>"10"},
       :object=>"polemonium"}

If a block is provided, yields incrementally to the block as the response is read. For large responses, this function is ideal as the response can be ‘streamed’. The hash containing header fields is still returned.
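When a local copy of the expected data is available, its MD5 can be computed with Ruby's Digest library and passed as :md5. A sketch under that assumption (verified_fetch is a hypothetical wrapper, not part of right_aws):

```ruby
require 'digest/md5'

# Hypothetical wrapper: download an object and insist that S3's checksum
# matches the MD5 of the data we expect to receive.
def verified_fetch(s3, bucket, key, expected_data)
  response = s3.retrieve_object(:bucket => bucket,
                                :key    => key,
                                :md5    => Digest::MD5.hexdigest(expected_data))
  raise "MD5 mismatch for #{bucket}/#{key}" unless response[:verified_md5]
  response[:object]
end
```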

Identical in function to retrieve_object, but requires verification that the returned ETag is identical to the checksum passed in by the user as the ‘md5’ argument. If the check passes, returns the response metadata with the "verified_md5" field set true. Raises an exception if the checksums conflict. This call is implemented as a wrapper around retrieve_object and the user may gain different semantics by creating a custom wrapper.

New experimental API for uploading objects, introduced in RightAws 1.8.1. store_object is similar in function to the older function put, but returns the full response metadata. It also allows for optional verification of object md5 checksums on upload. Parameters are passed as hash entries and are checked for completeness as well as for spurious arguments. The hash of the response headers contains useful information like the Amazon request ID and the object ETag (MD5 checksum).

If the optional :md5 argument is provided, store_object verifies that the given md5 matches the md5 returned by S3. The :verified_md5 field in the response hash is set true or false depending on the outcome of this check. If no :md5 argument is given, :verified_md5 will be false in the response.

The optional argument of :headers allows the caller to specify arbitrary request header values.

s3.store_object(:bucket => "foobucket", :key => "foo", :md5 => "a507841b1bc8115094b00bbe8c1b2954", :data => "polemonium" )

  => {"x-amz-id-2"=>"SVsnS2nfDaR+ixyJUlRKM8GndRyEMS16+oZRieamuL61pPxPaTuWrWtlYaEhYrI/",
      "etag"=>"\"a507841b1bc8115094b00bbe8c1b2954\"",
      "date"=>"Mon, 29 Sep 2008 18:57:46 GMT",
      :verified_md5=>true,
      "x-amz-request-id"=>"63916465939995BA",
      "server"=>"AmazonS3",
      "content-length"=>"0"}

s3.store_object(:bucket => "foobucket", :key => "foo", :data => "polemonium" )

  => {"x-amz-id-2"=>"MAt9PLjgLX9UYJ5tV2fI/5dBZdpFjlzRVpWgBDpvZpl+V+gJFcBMW2L+LBstYpbR",
      "etag"=>"\"a507841b1bc8115094b00bbe8c1b2954\"",
      "date"=>"Mon, 29 Sep 2008 18:58:56 GMT",
      :verified_md5=>false,
      "x-amz-request-id"=>"3B25A996BC2CDD3B",
      "server"=>"AmazonS3",
      "content-length"=>"0"}

Identical in function to store_object, but requires verification that the returned ETag is identical to the checksum passed in by the user as the ‘md5’ argument. If the check passes, returns the response metadata with the "verified_md5" field set true. Raises an exception if the checksums conflict. This call is implemented as a wrapper around store_object and the user may gain different semantics by creating a custom wrapper.

s3.store_object_and_verify(:bucket => "foobucket", :key => "foo", :md5 => "a507841b1bc8115094b00bbe8c1b2954", :data => "polemonium" )

  => {"x-amz-id-2"=>"IZN3XsH4FlBU0+XYkFTfHwaiF1tNzrm6dIW2EM/cthKvl71nldfVC0oVQyydzWpb",
      "etag"=>"\"a507841b1bc8115094b00bbe8c1b2954\"",
      "date"=>"Mon, 29 Sep 2008 18:38:32 GMT",
      :verified_md5=>true,
      "x-amz-request-id"=>"E8D7EA4FE00F5DF7",
      "server"=>"AmazonS3",
      "content-length"=>"0"}

s3.store_object_and_verify(:bucket => "foobucket", :key => "foo", :md5 => "a507841b1bc8115094b00bbe8c1b2953", :data => "polemonium" )

  RightAws::AwsError: Uploaded object failed MD5 checksum verification: {"x-amz-id-2"=>"HTxVtd2bf7UHHDn+WzEH43MkEjFZ26xuYvUzbstkV6nrWvECRWQWFSx91z/bl03n",
                                                                         "etag"=>"\"a507841b1bc8115094b00bbe8c1b2954\"",
                                                                         "date"=>"Mon, 29 Sep 2008 18:38:41 GMT",
                                                                         :verified_md5=>false,
                                                                         "x-amz-request-id"=>"0D7ADE09F42606F2",
                                                                         "server"=>"AmazonS3",
                                                                         "content-length"=>"0"}
