Acknowledged messages are redelivered when crashed core client reconnects
I have set up a standalone HornetQ instance running locally. For testing purposes I have created a consumer, using the HornetQ core API, which receives a message every 500 milliseconds.



I am seeing strange behaviour on the consumer side: my client connects and reads all the messages from the queue, but if I force-shutdown the client (without properly closing the session/connection), then the next time I start the consumer it reads the old messages from the queue again. Here is my consumer example:



// HornetQ Consumer Code



    public void readMessage() {
        ClientSession session = null;
        try {
            if (sf != null) {
                session = sf.createSession(true, true);

                ClientConsumer messageConsumer = session.createConsumer(JMS_QUEUE_NAME);
                session.start();

                while (true) {
                    ClientMessage messageReceived = messageConsumer.receive(1000);
                    if (messageReceived != null && messageReceived.getStringProperty(MESSAGE_PROPERTY_NAME) != null) {
                        System.out.println("Received message: " + messageReceived.getStringProperty(MESSAGE_PROPERTY_NAME));
                        messageReceived.acknowledge();
                    }

                    Thread.sleep(500);
                }
            }
        } catch (Exception e) {
            LOGGER.error("Error while consuming messages.", e);
        } finally {
            if (session != null) {
                try {
                    session.close();
                } catch (HornetQException e) {
                    LOGGER.error("Error while closing consumer session.", e);
                }
            }
        }
    }


Can someone tell me why it works like this, and what client- or server-side configuration I should use so that once a message is read by a consumer it is deleted from the queue?
  • You appear to be using the HornetQ "core" API rather than JMS (since ClientMessage isn't a JMS object). Can you confirm?
    – Justin Bertram
    Nov 19 '18 at 18:21
  • Sorry for the late response. Yes, that's true: ClientMessage is from the HornetQ core API. But how is that related to the problem?
    – user565
    Nov 20 '18 at 13:35
  • It's related to the problem because the two APIs behave differently. You are likely not committing the session after the acknowledgements are complete. Are you committing the session at any point or are you creating the session with auto-commit enabled for acknowledgements?
    – Justin Bertram
    Nov 20 '18 at 14:41
  • I have updated the code in the question. This is how I am testing a consumer in HornetQ.
    – user565
    Nov 20 '18 at 14:44
  • I also tried sf.createSession(true,false) but got the same behaviour: messages are redelivered when the client restarts and reconnects to the server. Is any server-side configuration required for this? I don't want to set a message expiry on the queue; I want a message deleted from the queue once a consumer has read it.
    – user565
    Nov 20 '18 at 14:51
java hornetq
edited Nov 20 '18 at 18:37
Justin Bertram
asked Nov 19 '18 at 14:29
user565
1 Answer
You are not committing the session after the acknowledgements are complete, and you are not creating the session with auto-commit for acknowledgements enabled. Therefore, you should do one of the following:




  • Either explicitly call session.commit() after one or more invocations of acknowledge()

  • Or enable implicit auto-commit for acknowledgements by creating the session using sf.createSession(true,true) or sf.createSession(false,true) (the boolean which controls auto-commit for acknowledgements is the second one).
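
For illustration, the first option might look like the following in a core-API consumer. This is a minimal sketch, not the poster's code: the queue name, the property name, and the assumption that `sf` is an already-created `ClientSessionFactory` connected to a running broker are all placeholders.

```java
// Sketch: explicit commit of acknowledgements with the HornetQ core API.
// Assumes 'sf' is an existing ClientSessionFactory; queue and property
// names are illustrative.
ClientSession session = sf.createSession(false, false); // no auto-commit for sends or acks
try {
    ClientConsumer consumer = session.createConsumer("exampleQueue");
    session.start();

    ClientMessage message;
    while ((message = consumer.receive(1000)) != null) {
        System.out.println("Received: " + message.getStringProperty("exampleProperty"));
        message.acknowledge();
        session.commit(); // the ack is not durable on the broker until this commit
    }
} finally {
    session.close();
}
```

With an explicitly committed session, a crash between `acknowledge()` and `commit()` still redelivers the message, but a crash after `commit()` does not.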


Keep in mind that when you enable auto-commit for acknowledgements there is an internal buffer which needs to reach a particular size before the acknowledgements are flushed to the broker. Batching acknowledgements like this can drastically improve performance for certain high-volume use-cases. By default you need to acknowledge 1,048,576 bytes worth of messages in order to flush the buffer and send the acknowledgements to the broker. You can change the size of this buffer by invoking setAckBatchSize on your ServerLocator instance or by using a different createSession method (e.g. sf.createSession(true, true, myAckBatchSize)).
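
As a sketch, either way of changing the batch size might look like the following. The connector class and variable names are illustrative assumptions, and a broker must be reachable for `createSessionFactory()` to succeed.

```java
// Sketch: control the acknowledgement batch size on the client side.
// A batch size of 0 sends every acknowledgement to the broker immediately,
// trading throughput for durability of acks.
ServerLocator locator = HornetQClient.createServerLocatorWithoutHA(
        new TransportConfiguration(NettyConnectorFactory.class.getName()));
locator.setAckBatchSize(0); // default is 1,048,576 bytes

ClientSessionFactory sf = locator.createSessionFactory();

// Alternatively, pass the batch size when creating an individual session:
ClientSession session = sf.createSession(true, true, 0);
```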



If the acknowledgement buffer isn't flushed and your client crashes then the corresponding messages will still be in the queue when the client comes back. If the buffer hasn't reached its threshold it will still be flushed anyway when the consumer is closed gracefully.
  • OK, let me test it this way.
    – user565
    Nov 20 '18 at 15:39
  • It is still behaving the same way. I think I am doing something wrong in either my producer or consumer code. I have updated the question again; can you please take a look and tell me whether this is the proper way to use an auto-commit session in both places?
    – user565
    Nov 20 '18 at 16:14
  • I have removed your producer code from the question as it's completely unrelated to the issue, and I've added some clarification to my answer.
    – Justin Bertram
    Nov 20 '18 at 18:26
  • Actually, I am following an example available in the HornetQ source, and the consumer implementation there doesn't mention anything about setting a MaxBatchSize after acknowledge. I am looking through the documentation now to see how to set this.
    – user565
    Nov 20 '18 at 18:29
  • I already outlined the simplest ways you can set the acknowledgement batch size. As for the example in the HornetQ source, my guess is that it's relying on the close() invocation to flush the acknowledgement buffer or it's acknowledging enough messages to flush the buffer on its own. Can you provide a link to the relevant HornetQ source?
    – Justin Bertram
    Nov 20 '18 at 18:33
edited Nov 20 '18 at 18:25
answered Nov 20 '18 at 15:33
Justin Bertram