Any concurrent.futures timeout that actually works?
Tried to write a process-based timeout (sync) on the cheap, like this:



from concurrent.futures import ProcessPoolExecutor

def call_with_timeout(func, *args, timeout=3):
    with ProcessPoolExecutor(max_workers=1) as pool:
        future = pool.submit(func, *args)
        result = future.result(timeout=timeout)


But it seems the timeout argument passed to future.result doesn't really work as advertised.



>>> import time
>>> t0 = time.time()
... call_with_timeout(time.sleep, 2, timeout=3)
... delta = time.time() - t0
... print('wall time:', delta)
wall time: 2.016767978668213


OK.



>>> t0 = time.time()
... call_with_timeout(time.sleep, 5, timeout=3)
... delta = time.time() - t0
... print('wall time:', delta)
# TimeoutError


Not OK - unblocked after 5 seconds, not 3 seconds.



Related questions show how to do this with thread pools, or with signal. How to timeout a process submitted to a pool after n seconds, without using any private API of multiprocessing? A hard kill is fine; there's no need to request a clean shutdown.

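For reference, the behaviour wanted, a hard kill after n seconds using only public multiprocessing API, would look roughly like this (an illustrative sketch, not part of the original question: run_with_hard_timeout and _target are made-up names, the result is assumed small and picklable, error handling in the child is omitted, and on spawn-based platforms the call must happen under an if __name__ == '__main__': guard):

import multiprocessing

def _target(queue, func, args):
    # Runs in the child process; error handling omitted for brevity.
    queue.put(func(*args))

def run_with_hard_timeout(func, *args, timeout=3):
    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=_target, args=(queue, func, args))
    proc.start()
    proc.join(timeout)           # wait at most `timeout` seconds
    if proc.is_alive():
        proc.terminate()         # hard kill: SIGTERM on POSIX, TerminateProcess on Windows
        proc.join()
        raise TimeoutError('timed out after {} seconds'.format(timeout))
    return queue.get()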

python multiprocessing timeout concurrent.futures

asked Jan 2 at 18:43
– wim

  • I'm seeing the expected timeout length in preliminary tests.

    – user2357112
    Jan 2 at 18:54

  • @user2357112 Interesting - time.sleep was just chosen as an MCVE and might have platform-dependent quirks - could you try it with a busy-looping function, or something dumb like 10**100000000?

    – wim
    Jan 2 at 18:56

  • ...oh, I think I see. I think cleaning up the pool is blocking.

    – user2357112
    Jan 2 at 19:04

  • I see the 5 second wall time for the second test on Ubuntu. What OS are you using?

    – John Anderson
    Jan 2 at 19:04
2 Answers
You might want to take a look at pebble.



Its ProcessPool was designed to solve this exact issue: it enables timeout and cancellation of running tasks without needing to shut down the entire pool.



When a future times out or is cancelled, the worker is actually terminated, effectively stopping execution of the scheduled function.



Timeout:



import pebble
from concurrent.futures import TimeoutError

pool = pebble.ProcessPool(max_workers=1)
future = pool.schedule(func, args=args, timeout=1)
try:
    future.result()
except TimeoutError:
    print("Timeout")


Example:



def call_with_timeout(func, *args, timeout=3):
    pool = pebble.ProcessPool(max_workers=1)
    with pool:
        future = pool.schedule(func, args=args, timeout=timeout)
        return future.result()

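Rerunning the question's failing case against this version should unblock after roughly 3 seconds rather than 5, since the timed-out worker is killed (an expected transcript, not a measured one; pebble raises concurrent.futures.TimeoutError):

>>> import time
>>> from concurrent.futures import TimeoutError
>>> t0 = time.time()
>>> try:
...     call_with_timeout(time.sleep, 5, timeout=3)
... except TimeoutError:
...     print('wall time:', time.time() - t0)   # expect ~3 seconds, not 5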

answered Jan 2 at 19:25 (edited Jan 17 at 22:41)
– noxdafox

  • I was about to add examples myself. You were indeed faster :)

    – noxdafox
    Jan 2 at 19:28

  • Yeah, I actually had something going with pebble already. I was kinda hoping there was some api in stdlib... but +1 anyway

    – wim
    Jan 2 at 19:30

  • I built pebble exactly because of that. The stdlib Pool implementations concurrent.futures and multiprocessing are all a bit too optimistic.

    – noxdafox
    Jan 2 at 19:30

  • Ah, I hadn't realised you're the pebble author. Thanks for your work!

    – wim
    Jan 2 at 19:32
The timeout is behaving as it should: future.result(timeout=timeout) stops waiting after the given timeout and raises TimeoutError. But exiting the with block then shuts down the pool, and shutdown waits for all pending futures to finish executing; that wait is what causes the unexpected delay.



You can make the shutdown happen in the background by calling shutdown(wait=False), but the overall Python program won't end until all pending futures finish executing anyway:



def call_with_timeout(func, *args, timeout=3):
    pool = ProcessPoolExecutor(max_workers=1)
    try:
        future = pool.submit(func, *args)
        result = future.result(timeout=timeout)
    finally:
        pool.shutdown(wait=False)   # don't block waiting for the worker to finish

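With this version the call itself should unblock after roughly the timeout, while the abandoned worker keeps running in the background until its task finishes (again an expected transcript, not a measured one):

>>> import time
>>> from concurrent.futures import TimeoutError
>>> t0 = time.time()
>>> try:
...     call_with_timeout(time.sleep, 5, timeout=3)
... except TimeoutError:
...     print('wall time:', time.time() - t0)   # expect ~3 seconds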

The Executor API offers no way to cancel a call that's already executing. future.cancel() can only cancel calls that haven't started yet. If you want abrupt abort functionality, you should probably use something other than concurrent.futures.ProcessPoolExecutor.
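
To make that concrete, here is a small sketch of the expected behaviour; the printed list is illustrative and timing-dependent (exactly when a future is marked running is an implementation detail), not a captured run:

from concurrent.futures import ProcessPoolExecutor
import time

if __name__ == '__main__':
    with ProcessPoolExecutor(max_workers=1) as pool:
        futures = [pool.submit(time.sleep, 1) for _ in range(5)]
        time.sleep(0.1)   # give the worker time to pick up the first call
        print([f.cancel() for f in futures])
        # roughly: [False, False, True, True, True]
        # calls already running (or already handed to the worker) cannot be
        # cancelled; still-pending calls are cancelled and never execute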

answered Jan 2 at 19:06 (edited Jan 2 at 19:12)
– user2357112

  • Yes, but I don't want to wait for pending futures to finish executing. Just want them killed (which is why using a subprocess and not a worker thread in the first place).

    – wim
    Jan 2 at 19:11

  • @wim: Answer expanded.

    – user2357112
    Jan 2 at 19:12

  • So is the answer essentially "there is no high-level API to do it"? Perhaps this is because concurrent.futures/multiprocessing must also work on Windows where SIGKILL is not necessarily available...

    – wim
    Jan 2 at 19:15