Create Tar archive from directory on S3 using AWS Lambda























I need to extract a number of zip files stored on S3, add their contents to a tar archive, and store that archive on S3. The combined size of the zip files will likely exceed the 512 MB of local storage allowed in Lambda functions. I have a partial solution that gets the objects from S3, extracts them, and puts them back into an S3 object without using Lambda's local storage.



Extract object Thread



public class ExtractObject implements Runnable {

    private final String objectName;
    private final String uuid;
    private final byte[] buffer = new byte[1024];

    // Constructor name must match the class name (was "ExtractAdvert").
    public ExtractObject(String name, String uuid) {
        this.objectName = name;
        this.uuid = uuid;
    }

    @Override
    public void run() {
        final String srcBucket = "my-bucket-name";
        final AmazonS3 s3Client = new AmazonS3Client();

        try {
            S3Object s3Object = s3Client.getObject(new GetObjectRequest(srcBucket, objectName));
            ZipInputStream zis = new ZipInputStream(s3Object.getObjectContent());
            ZipEntry entry = zis.getNextEntry();

            while (entry != null) {
                String fileName = entry.getName();
                String mimeType = FileMimeType.fromExtension(FilenameUtils.getExtension(fileName)).mimeType();
                System.out.println("Extracting " + fileName + ", compressed: " + entry.getCompressedSize()
                        + " bytes, extracted: " + entry.getSize() + " bytes, mimetype: " + mimeType);
                ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
                int len;
                while ((len = zis.read(buffer)) > 0) {
                    outputStream.write(buffer, 0, len);
                }
                InputStream is = new ByteArrayInputStream(outputStream.toByteArray());
                ObjectMetadata meta = new ObjectMetadata();
                meta.setContentLength(outputStream.size());
                meta.setContentType(mimeType);
                System.out.println("##### " + srcBucket + ", " + FilenameUtils.getFullPath(objectName) + "tmp/" + uuid + "/" + fileName);

                // Add this to tar archive instead of putting back to s3.
                // Note: S3 keys always use '/' as a separator, so "/" is safer than File.separator here.
                s3Client.putObject(srcBucket, FilenameUtils.getFullPath(objectName) + "tmp/" + uuid + "/" + fileName, is, meta);
                is.close();
                outputStream.close();
                entry = zis.getNextEntry();
            }
            zis.closeEntry();
            zis.close();
        } catch (IOException ioe) {
            System.out.println(ioe.getMessage());
        }
    }
}




This runs for each object that needs to be extracted, saving each extracted file as an S3 object in the structure required for the tar file.



I think that instead of putting each object back to S3, I need to keep it in memory, add it to a tar archive, and upload that. But after a lot of looking around and trial and error, I have not managed to create a valid tar file.
The main issue is that I can't use the tmp directory in Lambda.





Edit
Should I be creating the tar file as I go instead of putting objects back to S3 (see the comment // Add this to tar archive instead of putting back to s3)?
If so, how do I create a tar stream without storing it locally?
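To illustrate the general pattern of building an archive without local storage: the archive's `OutputStream` can write into an in-memory `ByteArrayOutputStream` instead of a file. The sketch below uses only the JDK's `java.util.zip` (since the standard library has no tar support; commons-compress's `TarArchiveOutputStream` follows the same `OutputStream` pattern). The class and method names are hypothetical, and this is a sketch rather than the poster's actual code.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class InMemoryArchive {

    // Build an archive entirely in memory: the archive stream writes into a
    // ByteArrayOutputStream instead of a file, so no /tmp storage is needed.
    public static byte[] archive(Map<String, byte[]> files) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        try (ZipOutputStream out = new ZipOutputStream(baos)) {
            for (Map.Entry<String, byte[]> e : files.entrySet()) {
                out.putNextEntry(new ZipEntry(e.getKey()));
                out.write(e.getValue());
                out.closeEntry();
            }
        } // close() writes the archive trailer before we grab the bytes
        return baos.toByteArray();
    }

    // Read the archive back, e.g. to verify the round trip.
    public static Map<String, byte[]> unarchive(byte[] data) throws IOException {
        Map<String, byte[]> files = new LinkedHashMap<>();
        try (ZipInputStream in = new ZipInputStream(new ByteArrayInputStream(data))) {
            ZipEntry entry;
            byte[] buf = new byte[1024];
            while ((entry = in.getNextEntry()) != null) {
                ByteArrayOutputStream content = new ByteArrayOutputStream();
                int len;
                while ((len = in.read(buf)) > 0) {
                    content.write(buf, 0, len);
                }
                files.put(entry.getName(), content.toByteArray());
            }
        }
        return files;
    }

    public static void main(String[] args) throws IOException {
        Map<String, byte[]> files = new LinkedHashMap<>();
        files.put("a.txt", "hello".getBytes());
        files.put("dir/b.txt", "world".getBytes());
        byte[] archived = archive(files);
        Map<String, byte[]> restored = unarchive(archived);
        System.out.println(restored.size()); // 2
        System.out.println(new String(restored.get("a.txt"))); // hello
    }
}
```

The trade-off is that the whole archive must fit in the Lambda's memory, so the function's memory allocation has to cover the archive size.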





EDIT 2: Attempt at tarring the files



ListObjectsV2Request req = new ListObjectsV2Request().withBucketName(bucketName);
ListObjectsV2Result result;

ByteArrayOutputStream baos = new ByteArrayOutputStream();
TarArchiveOutputStream tarOut = new TarArchiveOutputStream(baos);

do {
    result = s3Client.listObjectsV2(req);

    for (S3ObjectSummary objectSummary : result.getObjectSummaries()) {

        if (objectSummary.getKey().startsWith("tmp/")) {
            System.out.printf(" - %s (size: %d)%n", objectSummary.getKey(), objectSummary.getSize());
            S3Object s3Object = s3Client.getObject(new GetObjectRequest(bucketName, objectSummary.getKey()));
            InputStream is = s3Object.getObjectContent();
            System.out.println("Pre Create entry");
            TarArchiveEntry archiveEntry = new TarArchiveEntry(IOUtils.toByteArray(is));
            // Getting the following exception on the line above:
            // IllegalArgumentException: Invalid byte 111 at offset 7 in ' positio' len=8
            // (The TarArchiveEntry(byte[]) constructor expects a 512-byte tar *header*
            // block, not the file contents, so it fails while parsing the bytes.)
            System.out.println("Pre put entry");
            tarOut.putArchiveEntry(archiveEntry);
            System.out.println("Post put entry");
        }
    }

    String token = result.getNextContinuationToken();
    System.out.println("Next Continuation Token: " + token);
    req.setContinuationToken(token);
} while (result.isTruncated());

ObjectMetadata metadata = new ObjectMetadata();
InputStream is = new ByteArrayInputStream(baos.toByteArray());
s3Client.putObject(new PutObjectRequest(bucketName, bucketFolder + "tar-file", is, metadata));


























This question has an open bounty worth +50
reputation from Lonergan6275 ending in 5 days.


This question has not received enough attention.




















      java aws-lambda inputstream aws-sdk tar














      edited Nov 16 at 16:19

























asked Nov 12 at 15:14 by Lonergan6275






          1 Answer




































I have found a solution to this, and it is very similar to my attempt in Edit 2 above.



private final String bucketName = "bucket-name";
private final String bucketFolder = "tmp/";
private final String tarKey = "tar-dir/tared-file.tar";

private void createTar() throws IOException, ArchiveException {
    ListObjectsV2Request req = new ListObjectsV2Request().withBucketName(bucketName);
    ListObjectsV2Result result;

    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    TarArchiveOutputStream tarOut = new TarArchiveOutputStream(baos);

    do {
        result = s3Client.listObjectsV2(req);

        for (S3ObjectSummary objectSummary : result.getObjectSummaries()) {
            if (objectSummary.getKey().startsWith(bucketFolder)) {
                S3Object s3Object = s3Client.getObject(new GetObjectRequest(bucketName, objectSummary.getKey()));
                InputStream is = s3Object.getObjectContent();

                // Strip the "tmp/" prefix so the paths inside the tar start below it.
                String s3Key = objectSummary.getKey();
                String tarPath = s3Key.substring(s3Key.indexOf('/') + 1);

                byte[] ba = IOUtils.toByteArray(is);

                // Create the entry from a *name*, set its size, then write the
                // content through the tar stream and close the entry.
                TarArchiveEntry archiveEntry = new TarArchiveEntry(tarPath);
                archiveEntry.setSize(ba.length);
                tarOut.putArchiveEntry(archiveEntry);
                tarOut.write(ba);
                tarOut.closeArchiveEntry();
            }
        }

        String token = result.getNextContinuationToken();
        System.out.println("Next Continuation Token: " + token);
        req.setContinuationToken(token);
    } while (result.isTruncated());

    // Finish the archive so the trailing end-of-archive blocks are written.
    tarOut.close();

    ObjectMetadata metadata = new ObjectMetadata();
    InputStream is = new ByteArrayInputStream(baos.toByteArray());
    metadata.setContentLength(baos.size());
    s3Client.putObject(new PutObjectRequest(bucketName, tarKey, is, metadata));
}


























          • That's a terrible way to go about it. The best option would be to use multi-part uploads to S3. It'll go something like: add file to tar, upload bytes using multipart, add next file...
            – Trinopoty
            14 hours ago










          • @Trinopoty thanks for your input the bounty is still open if you would like to elaborate on your suggestion. either way i will look into it.
            – Lonergan6275
            13 hours ago










          • I'll try to come up with working code.
            – Trinopoty
            11 hours ago
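The streaming approach Trinopoty suggests in the comments would avoid holding the whole archive in memory: write the tar stream into a buffer, and every time a full part's worth of bytes has accumulated, ship it off as one part of an S3 multipart upload. Below is a minimal stdlib sketch of just the buffering piece; the `Consumer<byte[]>` stands in for the AWS SDK's upload-part call, the class name is hypothetical, and in real use the part size would have to be at least 5 MB (S3's minimum for every part except the last).

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Consumer;

// OutputStream that buffers writes and hands off fixed-size "parts" to an
// uploader callback, the way an S3 multipart upload consumes parts. Wiring
// the callback to the AWS SDK (InitiateMultipartUpload / UploadPart /
// CompleteMultipartUpload) is deliberately left out of this sketch.
public class PartBufferingOutputStream extends OutputStream {
    private final int partSize;
    private final Consumer<byte[]> uploader;
    private ByteArrayOutputStream buffer = new ByteArrayOutputStream();

    public PartBufferingOutputStream(int partSize, Consumer<byte[]> uploader) {
        this.partSize = partSize;
        this.uploader = uploader;
    }

    @Override
    public void write(int b) {
        buffer.write(b);
        flushFullParts();
    }

    @Override
    public void write(byte[] b, int off, int len) {
        buffer.write(b, off, len);
        flushFullParts();
    }

    // Emit as many complete parts as the buffer currently holds,
    // carrying any remainder over into a fresh buffer.
    private void flushFullParts() {
        while (buffer.size() >= partSize) {
            byte[] all = buffer.toByteArray();
            uploader.accept(Arrays.copyOfRange(all, 0, partSize));
            buffer = new ByteArrayOutputStream();
            buffer.write(all, partSize, all.length - partSize);
        }
    }

    @Override
    public void close() {
        if (buffer.size() > 0) {
            uploader.accept(buffer.toByteArray()); // final (short) part
        }
    }

    public static void main(String[] args) throws IOException {
        List<byte[]> parts = new ArrayList<>();
        // Tiny part size just for the demo; S3 would need >= 5 MB.
        PartBufferingOutputStream out = new PartBufferingOutputStream(4, parts::add);
        out.write("0123456789".getBytes()); // 10 bytes -> parts of 4, 4, 2
        out.close();
        System.out.println(parts.size()); // 3
    }
}
```

A `TarArchiveOutputStream` wrapped around this stream would then only ever keep roughly one part in memory, no matter how large the final archive grows.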











          1 Answer
          1






          active

          oldest

          votes








          1 Answer
          1






          active

          oldest

          votes









          active

          oldest

          votes






          active

          oldest

          votes








          up vote
          1
          down vote













          I have found a solution to this and it very similar to my attempt in Edit 2 above.



          private final String bucketName = "bucket-name";
          private final String bucketFolder = "tmp/";
          private final String tarKey = "tar-dir/tared-file.tar";

          private void createTar() throws IOException, ArchiveException {
          ListObjectsV2Request req = new ListObjectsV2Request().withBucketName(bucketName);
          ListObjectsV2Result result;

          ByteArrayOutputStream baos = new ByteArrayOutputStream();
          TarArchiveOutputStream tarOut = new TarArchiveOutputStream(baos);

          do {
          result = s3Client.listObjectsV2(req);

          for (S3ObjectSummary objectSummary : result.getObjectSummaries()) {
          if (objectSummary.getKey().startsWith(bucketFolder)) {
          S3Object s3Object = s3Client.getObject(new GetObjectRequest(bucketName, objectSummary.getKey()));
          InputStream is = s3Object.getObjectContent();

          String s3Key = objectSummary.getKey();
          String tarPath = s3Key.substring(s3Key.indexOf('/') + 1, s3Key.length());
          s3Key.lastIndexOf('.'));

          byte ba = IOUtils.toByteArray(is);

          TarArchiveEntry archiveEntry = new TarArchiveEntry(tarPath);
          archiveEntry.setSize(ba.length);
          tarOut.putArchiveEntry(archiveEntry);
          tarOut.write(ba);
          tarOut.closeArchiveEntry();
          }
          }

          String token = result.getNextContinuationToken();
          System.out.println("Next Continuation Token: " + token);
          req.setContinuationToken(token);
          } while (result.isTruncated());

          ObjectMetadata metadata = new ObjectMetadata();
          InputStream is = baos.toInputStream();
          metadata.setContentLength(baos.size());
          s3Client.putObject(new PutObjectRequest(bucketName, tarKey, is, metadata));
          }





          share|improve this answer





















          • That's a terrible way to go about it. The best option would be to use multi-part uploads to S3. It'll go something like: add file to tar, upload bytes using multipart, add next file...
            – Trinopoty
            14 hours ago










          • @Trinopoty thanks for your input the bounty is still open if you would like to elaborate on your suggestion. either way i will look into it.
            – Lonergan6275
            13 hours ago










          • I'll try to come up with working code.
            – Trinopoty
            11 hours ago















          up vote
          1
          down vote













          I have found a solution to this and it very similar to my attempt in Edit 2 above.



          private final String bucketName = "bucket-name";
          private final String bucketFolder = "tmp/";
          private final String tarKey = "tar-dir/tared-file.tar";

          private void createTar() throws IOException, ArchiveException {
          ListObjectsV2Request req = new ListObjectsV2Request().withBucketName(bucketName);
          ListObjectsV2Result result;

          ByteArrayOutputStream baos = new ByteArrayOutputStream();
          TarArchiveOutputStream tarOut = new TarArchiveOutputStream(baos);

          do {
          result = s3Client.listObjectsV2(req);

          for (S3ObjectSummary objectSummary : result.getObjectSummaries()) {
          if (objectSummary.getKey().startsWith(bucketFolder)) {
          S3Object s3Object = s3Client.getObject(new GetObjectRequest(bucketName, objectSummary.getKey()));
          InputStream is = s3Object.getObjectContent();

          String s3Key = objectSummary.getKey();
          String tarPath = s3Key.substring(s3Key.indexOf('/') + 1, s3Key.length());
          s3Key.lastIndexOf('.'));

          byte ba = IOUtils.toByteArray(is);

          TarArchiveEntry archiveEntry = new TarArchiveEntry(tarPath);
          archiveEntry.setSize(ba.length);
          tarOut.putArchiveEntry(archiveEntry);
          tarOut.write(ba);
          tarOut.closeArchiveEntry();
          }
          }

          String token = result.getNextContinuationToken();
          System.out.println("Next Continuation Token: " + token);
          req.setContinuationToken(token);
          } while (result.isTruncated());

          ObjectMetadata metadata = new ObjectMetadata();
          InputStream is = baos.toInputStream();
          metadata.setContentLength(baos.size());
          s3Client.putObject(new PutObjectRequest(bucketName, tarKey, is, metadata));
          }





          share|improve this answer





















          • That's a terrible way to go about it. The best option would be to use multi-part uploads to S3. It'll go something like: add file to tar, upload bytes using multipart, add next file...
            – Trinopoty
            14 hours ago










          • @Trinopoty thanks for your input the bounty is still open if you would like to elaborate on your suggestion. either way i will look into it.
            – Lonergan6275
            13 hours ago










          • I'll try to come up with working code.
            – Trinopoty
            11 hours ago













          up vote
          1
          down vote










          up vote
          1
          down vote









          I have found a solution to this, and it is very similar to my attempt in Edit 2 above.



          private final String bucketName = "bucket-name";
          private final String bucketFolder = "tmp/";
          private final String tarKey = "tar-dir/tared-file.tar";

          private void createTar() throws IOException, ArchiveException {
              ListObjectsV2Request req = new ListObjectsV2Request().withBucketName(bucketName);
              ListObjectsV2Result result;

              ByteArrayOutputStream baos = new ByteArrayOutputStream();
              TarArchiveOutputStream tarOut = new TarArchiveOutputStream(baos);

              do {
                  result = s3Client.listObjectsV2(req);

                  for (S3ObjectSummary objectSummary : result.getObjectSummaries()) {
                      if (objectSummary.getKey().startsWith(bucketFolder)) {
                          S3Object s3Object = s3Client.getObject(new GetObjectRequest(bucketName, objectSummary.getKey()));
                          InputStream is = s3Object.getObjectContent();

                          String s3Key = objectSummary.getKey();
                          // Strip the leading folder so entries are rooted at the top of the tar
                          String tarPath = s3Key.substring(s3Key.indexOf('/') + 1);

                          byte[] ba = IOUtils.toByteArray(is);

                          TarArchiveEntry archiveEntry = new TarArchiveEntry(tarPath);
                          archiveEntry.setSize(ba.length);
                          tarOut.putArchiveEntry(archiveEntry);
                          tarOut.write(ba);
                          tarOut.closeArchiveEntry();
                      }
                  }

                  String token = result.getNextContinuationToken();
                  System.out.println("Next Continuation Token: " + token);
                  req.setContinuationToken(token);
              } while (result.isTruncated());

              // Finish the archive (writes the trailing tar blocks) before uploading
              tarOut.close();

              ObjectMetadata metadata = new ObjectMetadata();
              InputStream is = new ByteArrayInputStream(baos.toByteArray());
              metadata.setContentLength(baos.size());
              s3Client.putObject(new PutObjectRequest(bucketName, tarKey, is, metadata));
          }
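One caveat with the approach above: createTar buffers the entire tar in a ByteArrayOutputStream, so Lambda's memory limit, rather than the 512 MB of /tmp, becomes the ceiling. A way around that is to stream the tar into an S3 multipart upload, flushing fixed-size parts as they fill (S3 requires every part except the last to be at least 5 MB). Below is a minimal, self-contained sketch of just the chunking side; the `PartBufferingOutputStream` class and its `partSink` callback are hypothetical names, with the callback standing in for the actual `UploadPartRequest` call.

```java
import java.io.ByteArrayOutputStream;
import java.io.OutputStream;
import java.util.function.BiConsumer;

// Buffers an output stream into fixed-size parts, e.g. for an S3 multipart
// upload, where every part except the last must be at least 5 MB.
class PartBufferingOutputStream extends OutputStream {
    private final int partSize;
    private final BiConsumer<Integer, byte[]> partSink; // (partNumber, bytes) -> upload one part
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    private int partNumber = 1;

    PartBufferingOutputStream(int partSize, BiConsumer<Integer, byte[]> partSink) {
        this.partSize = partSize;
        this.partSink = partSink;
    }

    @Override public void write(int b) {
        buffer.write(b);
        flushFullParts();
    }

    @Override public void write(byte[] b, int off, int len) {
        buffer.write(b, off, len);
        flushFullParts();
    }

    // Emit every complete part currently in the buffer, keeping the remainder.
    private void flushFullParts() {
        while (buffer.size() >= partSize) {
            byte[] all = buffer.toByteArray();
            byte[] part = new byte[partSize];
            System.arraycopy(all, 0, part, 0, partSize);
            partSink.accept(partNumber++, part);
            buffer.reset();
            buffer.write(all, partSize, all.length - partSize);
        }
    }

    @Override public void close() {
        if (buffer.size() > 0) { // final part may be smaller than partSize
            partSink.accept(partNumber++, buffer.toByteArray());
            buffer.reset();
        }
    }
}
```

Wrapping the TarArchiveOutputStream around a stream like this (with partSize of at least 5 MB) would let each flushed part become an UploadPartRequest, with close() followed by CompleteMultipartUpload, so only one part ever sits in memory at a time.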





          answered 14 hours ago
          Lonergan6275

          • That's a terrible way to go about it. The best option would be to use multi-part uploads to S3. It'll go something like: add file to tar, upload bytes using multipart, add next file...
            – Trinopoty
            14 hours ago

          • @Trinopoty thanks for your input; the bounty is still open if you would like to elaborate on your suggestion. Either way I will look into it.
            – Lonergan6275
            13 hours ago

          • I'll try to come up with working code.
            – Trinopoty
            11 hours ago







