Create Tar archive from directory on S3 using AWS Lambda
I need to extract a bunch of zip files stored on S3, add their contents to a tar archive, and store that archive on S3. It is likely that the sum of the zip files will be greater than the 512 MB of local storage allowed in Lambda functions. I have a partial solution that gets the objects from S3, extracts them, and puts them back into S3 objects without using the Lambda local storage.
Extract object thread:
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

import org.apache.commons.io.FilenameUtils;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.S3Object;

public class ExtractObject implements Runnable {

    private String objectName;
    private String uuid;
    private final byte[] buffer = new byte[1024];

    public ExtractObject(String name, String uuid) {
        this.objectName = name;
        this.uuid = uuid;
    }

    @Override
    public void run() {
        final String srcBucket = "my-bucket-name";
        final AmazonS3 s3Client = new AmazonS3Client();
        try {
            S3Object s3Object = s3Client.getObject(new GetObjectRequest(srcBucket, objectName));
            ZipInputStream zis = new ZipInputStream(s3Object.getObjectContent());
            ZipEntry entry = zis.getNextEntry();
            while (entry != null) {
                String fileName = entry.getName();
                // FileMimeType is a project-specific helper that maps extensions to MIME types.
                String mimeType = FileMimeType.fromExtension(FilenameUtils.getExtension(fileName)).mimeType();
                System.out.println("Extracting " + fileName + ", compressed: " + entry.getCompressedSize()
                        + " bytes, extracted: " + entry.getSize() + " bytes, mimetype: " + mimeType);
                // Buffer the extracted entry fully in memory, then put it back to S3.
                ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
                int len;
                while ((len = zis.read(buffer)) > 0) {
                    outputStream.write(buffer, 0, len);
                }
                InputStream is = new ByteArrayInputStream(outputStream.toByteArray());
                ObjectMetadata meta = new ObjectMetadata();
                meta.setContentLength(outputStream.size());
                meta.setContentType(mimeType);
                String destKey = FilenameUtils.getFullPath(objectName) + "tmp" + File.separator
                        + uuid + File.separator + fileName;
                System.out.println("##### " + srcBucket + ", " + destKey);
                // Add this to tar archive instead of putting back to s3
                s3Client.putObject(srcBucket, destKey, is, meta);
                is.close();
                outputStream.close();
                entry = zis.getNextEntry();
            }
            zis.closeEntry();
            zis.close();
        } catch (IOException ioe) {
            System.out.println(ioe.getMessage());
        }
    }
}
This runs for each object that needs to be extracted and saves it as an S3 object in the structure needed for the tar file.
I think what I need, instead of putting each object back to S3, is to keep it in memory, add it to a tar archive, and upload that; but after a lot of looking around and trial and error I have not created a successful tar file.
The main issue is that I can't rely on the tmp directory in Lambda.
Edit
Should I be creating the tar file as I go instead of putting objects to S3? (See the comment // Add this to tar archive instead of putting back to s3.)
If so, how do I create a tar stream without storing it locally?
EDIT 2: Attempt at tarring the files
ListObjectsV2Request req = new ListObjectsV2Request().withBucketName(bucketName);
ListObjectsV2Result result;
ByteArrayOutputStream baos = new ByteArrayOutputStream();
TarArchiveOutputStream tarOut = new TarArchiveOutputStream(baos);
do {
    result = s3Client.listObjectsV2(req);
    for (S3ObjectSummary objectSummary : result.getObjectSummaries()) {
        if (objectSummary.getKey().startsWith("tmp/")) {
            System.out.printf(" - %s (size: %d)%n", objectSummary.getKey(), objectSummary.getSize());
            S3Object s3Object = s3Client.getObject(new GetObjectRequest(bucketName, objectSummary.getKey()));
            InputStream is = s3Object.getObjectContent();
            System.out.println("Pre Create entry");
            TarArchiveEntry archiveEntry = new TarArchiveEntry(IOUtils.toByteArray(is));
            // Getting the following exception on the line above:
            // IllegalArgumentException: Invalid byte 111 at offset 7 in ' positio' len=8
            System.out.println("Pre put entry");
            tarOut.putArchiveEntry(archiveEntry);
            System.out.println("Post put entry");
        }
    }
    String token = result.getNextContinuationToken();
    System.out.println("Next Continuation Token: " + token);
    req.setContinuationToken(token);
} while (result.isTruncated());
ObjectMetadata metadata = new ObjectMetadata();
InputStream is = new ByteArrayInputStream(baos.toByteArray());
s3Client.putObject(new PutObjectRequest(bucketName, bucketFolder + "tar-file", is, metadata));
java aws-lambda inputstream aws-sdk tar
This question has an open bounty worth +50 reputation from Lonergan6275, ending in 5 days.
This question has not received enough attention.
edited Nov 16 at 16:19
asked Nov 12 at 15:14
Lonergan6275
1 Answer
I have found a solution to this, and it is very similar to my attempt in Edit 2 above.
private final String bucketName = "bucket-name";
private final String bucketFolder = "tmp/";
private final String tarKey = "tar-dir/tared-file.tar";

private void createTar() throws IOException, ArchiveException {
    ListObjectsV2Request req = new ListObjectsV2Request().withBucketName(bucketName);
    ListObjectsV2Result result;
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    TarArchiveOutputStream tarOut = new TarArchiveOutputStream(baos);
    do {
        result = s3Client.listObjectsV2(req);
        for (S3ObjectSummary objectSummary : result.getObjectSummaries()) {
            if (objectSummary.getKey().startsWith(bucketFolder)) {
                S3Object s3Object = s3Client.getObject(new GetObjectRequest(bucketName, objectSummary.getKey()));
                InputStream is = s3Object.getObjectContent();
                String s3Key = objectSummary.getKey();
                // Strip the leading folder so entries sit at the top of the archive.
                String tarPath = s3Key.substring(s3Key.indexOf('/') + 1);
                byte[] ba = IOUtils.toByteArray(is);
                // Create the entry from a path and set its size explicitly.
                // The byte[] constructor parses its argument as a raw tar header
                // block, which is what caused the IllegalArgumentException in Edit 2.
                TarArchiveEntry archiveEntry = new TarArchiveEntry(tarPath);
                archiveEntry.setSize(ba.length);
                tarOut.putArchiveEntry(archiveEntry);
                tarOut.write(ba);
                tarOut.closeArchiveEntry();
            }
        }
        String token = result.getNextContinuationToken();
        System.out.println("Next Continuation Token: " + token);
        req.setContinuationToken(token);
    } while (result.isTruncated());
    tarOut.finish(); // write the end-of-archive records before uploading
    ObjectMetadata metadata = new ObjectMetadata();
    metadata.setContentLength(baos.size());
    InputStream is = new ByteArrayInputStream(baos.toByteArray());
    s3Client.putObject(new PutObjectRequest(bucketName, tarKey, is, metadata));
}
answered 14 hours ago
Lonergan6275
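As a quick sanity check (an illustrative sketch, not part of the original answer; it assumes the same s3Client field and Commons Compress on the classpath), the uploaded archive can be read back and its entries listed:

    S3Object tarObject = s3Client.getObject(bucketName, tarKey);
    // TarArchiveInputStream is org.apache.commons.compress.archivers.tar.TarArchiveInputStream;
    // run this inside a method that declares throws IOException.
    try (TarArchiveInputStream tarIn = new TarArchiveInputStream(tarObject.getObjectContent())) {
        TarArchiveEntry entry;
        while ((entry = tarIn.getNextTarEntry()) != null) {
            System.out.println(entry.getName() + " (" + entry.getSize() + " bytes)");
        }
    }

Each key under tmp/ should appear once, with the size recorded when the entry was written.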
That's a terrible way to go about it. The best option would be to use multi-part uploads to S3. It'll go something like: add file to tar, upload bytes using multipart, add next file...
– Trinopoty
14 hours ago
@Trinopoty thanks for your input; the bounty is still open if you would like to elaborate on your suggestion. Either way I will look into it.
– Lonergan6275
13 hours ago
I'll try to come up with working code.
– Trinopoty
11 hours ago
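For completeness, here is a minimal sketch of the incremental approach Trinopoty describes, assuming the AWS SDK for Java v1 and Commons Compress; the class name StreamingTarUploader, the method names, and the 5 MB flush threshold are illustrative, not from this thread:

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
    import org.apache.commons.compress.archivers.tar.TarArchiveOutputStream;
    import org.apache.commons.io.IOUtils;

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.model.*;

    public class StreamingTarUploader {

        // All multipart parts except the last must be at least 5 MB.
        private static final int PART_SIZE = 5 * 1024 * 1024;

        private final AmazonS3 s3Client;

        public StreamingTarUploader(AmazonS3 s3Client) {
            this.s3Client = s3Client;
        }

        public void tarPrefixToS3(String bucketName, String prefix, String tarKey) throws IOException {
            InitiateMultipartUploadResult init = s3Client.initiateMultipartUpload(
                    new InitiateMultipartUploadRequest(bucketName, tarKey));
            List<PartETag> partETags = new ArrayList<>();
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            TarArchiveOutputStream tarOut = new TarArchiveOutputStream(buffer);
            int partNumber = 1;

            ListObjectsV2Request req = new ListObjectsV2Request()
                    .withBucketName(bucketName).withPrefix(prefix);
            ListObjectsV2Result result;
            do {
                result = s3Client.listObjectsV2(req);
                for (S3ObjectSummary summary : result.getObjectSummaries()) {
                    S3Object s3Object = s3Client.getObject(bucketName, summary.getKey());
                    byte[] content = IOUtils.toByteArray(s3Object.getObjectContent());
                    TarArchiveEntry entry = new TarArchiveEntry(summary.getKey());
                    entry.setSize(content.length);
                    tarOut.putArchiveEntry(entry);
                    tarOut.write(content);
                    tarOut.closeArchiveEntry();
                    // Ship a part as soon as enough tar bytes have accumulated.
                    if (buffer.size() >= PART_SIZE) {
                        partNumber = flushPart(bucketName, tarKey, init.getUploadId(),
                                partNumber, buffer, partETags, false);
                    }
                }
                req.setContinuationToken(result.getNextContinuationToken());
            } while (result.isTruncated());

            tarOut.finish(); // write the end-of-archive records
            flushPart(bucketName, tarKey, init.getUploadId(), partNumber, buffer, partETags, true);
            s3Client.completeMultipartUpload(new CompleteMultipartUploadRequest(
                    bucketName, tarKey, init.getUploadId(), partETags));
        }

        private int flushPart(String bucket, String key, String uploadId, int partNumber,
                              ByteArrayOutputStream buffer, List<PartETag> partETags, boolean lastPart) {
            byte[] bytes = buffer.toByteArray();
            UploadPartRequest partReq = new UploadPartRequest()
                    .withBucketName(bucket).withKey(key).withUploadId(uploadId)
                    .withPartNumber(partNumber)
                    .withInputStream(new ByteArrayInputStream(bytes))
                    .withPartSize(bytes.length)
                    .withLastPart(lastPart);
            partETags.add(s3Client.uploadPart(partReq).getPartETag());
            buffer.reset(); // keep only un-shipped bytes in memory
            return partNumber + 1;
        }
    }

Because S3 simply concatenates the uploaded parts, part boundaries do not need to line up with tar entry boundaries, and only the last part may be smaller than 5 MB. Memory use is then bounded by the largest single object plus roughly one part buffer, rather than by the whole archive.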