`ConnectionClosed` when using `getObject` from amazonka-s3

I have a function



    import Control.Lens ((^.))
    import Data.Conduit (sinkLazy)
    import Data.Text (Text)
    import Network.AWS (MonadAWS, send, sinkBody)
    import Network.AWS.S3 (BucketName (..), ObjectKey (..), gorsBody, getObject)
    import qualified Data.ByteString.Lazy as LBS

    getObjectData :: MonadAWS m => Text -> Text -> m LBS.ByteString
    getObjectData b k = do
      resp <- send $ getObject (BucketName b) (ObjectKey k)
      -- stream the response body into a lazy ByteString
      (resp ^. gorsBody) `sinkBody` sinkLazy


whose purpose is to read the contents of an object on S3 into a lazy ByteString.



Sending the request succeeds, and I can see the response. As expected, the gorsBody field is shown as RsBody { ConduitM () ByteString (ResourceT IO) () }, since that is its type.



When I try the last line of the function, I get something like this:



    *** Exception: HttpExceptionRequest Request {
    host = "s3.amazonaws.com"
    port = 443
    secure = True
    requestHeaders = [("Host","s3.amazonaws.com"),("X-Amz-Date","20181121T001938Z"),("X-Amz-Content-SHA256","blah"),("X-Amz-Security-Token","blah"),("Authorization","<REDACTED>")]
    path = "/path/to/my/file.txt"
    queryString = ""
    method = "GET"
    proxy = Nothing
    rawBody = False
    redirectCount = 0
    responseTimeout = ResponseTimeoutMicro 70000000
    requestVersion = HTTP/1.1
    }
    ConnectionClosed


It seems as though this might have something to do with laziness: perhaps the response body was never consumed before the connection was closed. But that's pure speculation, and in any case I'm not sure how to address it. Does anyone have an idea of what is happening here? This seems like the intended way to use amazonka-s3 with conduit.
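
For reference, here is roughly how I invoke getObjectData (a minimal sketch, not my actual setup: newEnv Discover is one way to build the environment, and the bucket/key are placeholders):

    {-# LANGUAGE OverloadedStrings #-}

    import Control.Monad.Trans.Resource (runResourceT)
    import Network.AWS (Credentials (Discover), newEnv, runAWS)
    import qualified Data.ByteString.Lazy as LBS

    main :: IO ()
    main = do
      -- discover credentials from the environment
      env <- newEnv Discover
      -- run the whole action, including the sink, inside one ResourceT scope
      bytes <- runResourceT . runAWS env $
        getObjectData "my-bucket" "path/to/my/file.txt"
      LBS.putStr bytes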



I'm using lts-11.14 and amazonka-s3-1.6.0.

haskell amazon-s3 conduit

asked Nov 21 '18 at 4:27 (edited Nov 21 '18 at 4:53) · user4601931

1 Answer

As it turns out, this is a known issue with the Stackage release of amazonka-s3, which hasn't been fixed yet. The workaround is to upgrade the amazonka/core/s3 dependencies to point to a fixed version of master:

    # stack.yaml
    extra-deps:
    - git: git@github.com:brendanhay/amazonka
      commit: 248f7b2a7248222cc21cef6194cd1872ba99ac5d
      subdirs:
      - amazonka
      - core
      - amazonka-s3
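
If the new extra-deps don't seem to take effect, a full clean rebuild may be needed; this is the command suggested in the comments below:

    stack clean && rm -rf .stack-work/ && stack build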

answered Nov 21 '18 at 5:25 by user4601931 (edited Dec 28 '18 at 15:24 by Ashesh)

• Have you tried using this hash? I'm running into the same bug even with the hash you mentioned. – Ashesh, Dec 28 '18 at 15:27

• @Ashesh 248f7b2a7248222cc21cef6194cd1872ba99ac5d is indeed the commit hash that works for me. – user4601931, Dec 28 '18 at 20:26

• Stupid suggestion, but have you tried a stack clean && rm -rf .stack-work/ && stack build? – user4601931, Dec 28 '18 at 20:28

• Yeah, it works for me. It was my bad: I was doing runResourceT before consuming the response body. – Ashesh, Dec 30 '18 at 9:07
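
To illustrate the pitfall from the last comment: the streaming body has to be sunk inside the same ResourceT scope that ran send, because closing the scope releases the connection. A minimal sketch (the bucket and key are illustrative, and the imports mirror the question's):

    {-# LANGUAGE OverloadedStrings #-}

    import Control.Lens ((^.))
    import Control.Monad.Trans.Resource (runResourceT)
    import Data.Conduit (sinkLazy)
    import Network.AWS (Env, runAWS, send, sinkBody)
    import Network.AWS.S3 (BucketName (..), ObjectKey (..), getObject, gorsBody)
    import qualified Data.ByteString.Lazy as LBS

    -- Broken (sketch): the ResourceT scope ends before the body is consumed,
    -- so the connection is already closed when sinkBody runs.
    broken :: Env -> IO LBS.ByteString
    broken env = do
      resp <- runResourceT . runAWS env $
        send (getObject (BucketName "my-bucket") (ObjectKey "my-key"))
      (resp ^. gorsBody) `sinkBody` sinkLazy  -- throws ConnectionClosed

    -- Working (sketch): consume the body within the same scope.
    working :: Env -> IO LBS.ByteString
    working env = runResourceT . runAWS env $ do
      resp <- send (getObject (BucketName "my-bucket") (ObjectKey "my-key"))
      (resp ^. gorsBody) `sinkBody` sinkLazy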