Does the synchronized construct in Java internally (and somehow) use the hardware CAS primitive?
I am having a hard time understanding what hardware support exists for the synchronized statement and the associated notify(), notifyAll() and wait() methods present on every Java object.
I have read about and know how to use these constructs, but I have always assumed that they mapped directly to hardware primitives. As I delve further into books about concurrency, I only read about the compare-and-swap (CAS) operation being directly provided by hardware.
It seems as though these constructs are created and maintained by the JVM itself. If my reading is correct, each object contains some state with information about the thread accessing it; this state defines the monitor of that object and coordinates access to the object by multiple threads.
But if that is the case, how is this state itself protected from concurrent access? It must surely be managed somehow, correct? Is it done with CAS?
If it is done with CAS, that means there is only one real form of synchronization, CAS, and all others are derivatives. Why, then, was this monitor construct with the associated synchronized, notify(), notifyAll() and wait() methods developed, given that atomic variables (i.e. CAS) are better in terms of performance and also wait-free?
I am aware that atomic variables for user classes only appeared around Java 5.0, but before that Java already had these monitors/intrinsic locks. How were they implemented?
java scala concurrency synchronization java-memory-model
stackoverflow.com/questions/1485924/how-are-mutexes-implemented may be helpful.
– Alexey Romanov
Nov 21 '18 at 7:04
I fear this cannot be answered comprehensively - it strictly depends on the runtime implementation for the specific platform. There may even be implementations for platforms that don't provide any hardware support for such operations; on such a platform, the runtime would have to emulate these things at the application level.
– Hulk
Nov 23 '18 at 8:30
edited Nov 21 '18 at 16:37 by Lasf
asked Nov 20 '18 at 20:20 by Carlos Teixeira
1 Answer
Settle in, kids, this is going to be a long one.
First, let's discuss CAS (compare-and-swap). It is not a synchronization mechanism; it is an atomic operation that updates a value in main memory while simultaneously testing that the value has not changed (i.e. that it is still what we expect it to be). There is no locking involved, although CAS is used by some synchronization primitives (semaphores, mutexes). Let's take a look at the following example:
a = 1;
--------------------------------
Thread 1 | Thread 2
b = 1 + a | b = 2 + a
cas(*a, 1, b ) | cas(*a, 1, b )
Now one of the CAS operations will fail, by which I mean it will return false. The other will return true, and the value that the pointer *a refers to will be updated with the new value. If we didn't use CAS but instead just updated the value, like this:
a = 1;
--------------------------------
Thread 1 | Thread 2
b = 1 + a | b = 2 + a
a = b | a = b
At the end of this computation, a could be 2 or 3, and both threads would complete happily, neither knowing which value was actually saved in a. This is what is called a data race, and CAS is a way to solve it.
The existence of CAS enables us to write lock-free algorithms (no locking needed), like the collections in the java.util.concurrent package, which can be accessed concurrently without being synchronized.
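To make the pattern concrete, here is a minimal sketch (not the actual java.util.concurrent implementation; class and method names are made up for illustration) of a lock-free increment built on the read-compute-CAS-retry loop:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasRetryDemo {
    // Atomically add delta to counter: read, compute, CAS, retry on failure.
    static int addWithCas(AtomicInteger counter, int delta) {
        while (true) {
            int expected = counter.get();        // read the current value
            int updated = expected + delta;      // compute the new value
            // compareAndSet succeeds only if the value is still `expected`
            if (counter.compareAndSet(expected, updated)) {
                return updated;
            }
            // another thread changed the value in between: retry with a fresh read
        }
    }

    public static void main(String[] args) {
        AtomicInteger a = new AtomicInteger(1);
        System.out.println(addWithCas(a, 2)); // prints 3
    }
}
```

No thread ever blocks here: a losing thread just loops again with a fresh read, which is exactly the failure mode discussed above.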
Now, I mentioned that CAS is used to implement synchronization. That is why the cost of acquiring a lock and the cost of performing a CAS are almost the same (if there is no contention!), and in that sense you do get hardware support for the synchronized keyword. Compare:
// intrinsic lock (monitor):
synchronized (this) {
    n = n + 1;
}

// CAS-based equivalent:
AtomicLong al = new AtomicLong();
al.updateAndGet(n -> n + 1);
The performance hit you might see with synchronized versus CAS comes from this: when a CAS fails you can simply retry, while with synchronized the thread might be put to sleep by the OS, going down the rabbit hole of context switches (which may or may not happen :), depending on the OS).
Now for notify(), notifyAll() and wait(). These end up as calls to the thread scheduler that is part of the OS. The scheduler keeps two queues, a wait queue and a run queue. When you invoke wait() on a thread, that thread is placed in the wait queue and sits there until it gets notified and is moved to the run queue, to be executed as soon as possible.
In Java there are two basic forms of thread synchronization: one via wait()/notify(), called cooperation, and the other via locks, called mutual exclusion (mutex). These are generally two parallel tracks for doing things at once.
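A minimal sketch of the cooperation track (class and method names are made up for illustration): one thread parks in wait() until another thread flips a flag and calls notifyAll(). Note that both sides must hold the object's monitor, and the waiter loops to guard against spurious wakeups:

```java
public class WaitNotifyDemo {
    private final Object lock = new Object();
    private boolean ready = false;

    // Consumer side: waits until the flag is set.
    void awaitReady() throws InterruptedException {
        synchronized (lock) {
            while (!ready) {      // loop: guards against spurious wakeups
                lock.wait();      // releases the monitor and parks the thread
            }
        }
    }

    // Producer side: sets the flag and wakes all waiters.
    void signalReady() {
        synchronized (lock) {
            ready = true;
            lock.notifyAll();     // moves waiting threads back toward the run queue
        }
    }

    public static void main(String[] args) throws InterruptedException {
        WaitNotifyDemo demo = new WaitNotifyDemo();
        Thread consumer = new Thread(() -> {
            try {
                demo.awaitReady();
            } catch (InterruptedException ignored) {
            }
            System.out.println("got signal");
        });
        consumer.start();
        Thread.sleep(50);         // demo only: let the consumer park first
        demo.signalReady();
        consumer.join();
    }
}
```

The wait()/notifyAll() pair here is the cooperation mechanism, while the synchronized blocks around them are the mutual-exclusion mechanism; the two are used together.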
Now, I don't know how synchronization was done before Java 5, but today there are two ways to synchronize using an object (one of them is probably the old one, the other new):
Biased locking. The thread id is put in the object header, and from then on, when that same specific thread wants to lock or unlock that object, the operation costs us nothing. This is why, if our app has a lot of uncontended locks, biased locking can give a significant performance boost, as we can avoid the second path:
(this is probably the old one) using monitorenter/monitorexit. These are bytecode instructions that are placed at the entry and exit of a synchronized {...} block. This is where the object's identity becomes relevant, as it becomes part of the lock information.
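You can see this pair yourself. The small class below (a made-up example) has a synchronized block; the comments indicate which bytecode instructions the compiler emits for it (abridged; the exact instruction sequence, and the compiler-generated exception handler that also runs monitorexit, vary):

```java
public class MonitorDemo {
    int n = 0;

    void inc() {
        synchronized (this) {    // compiles to monitorenter on `this`
            n = n + 1;
        }                        // compiles to monitorexit on the normal path
                                 // (an exception handler also runs monitorexit)
    }

    public static void main(String[] args) {
        MonitorDemo d = new MonitorDemo();
        d.inc();
        System.out.println(d.n); // prints 1
    }
}
```

Compiling this and disassembling with javap -c MonitorDemo shows the monitorenter/monitorexit pair around the increment.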
OK, that's it. I know I didn't answer the question fully; the subject is complicated and difficult. Chapter 17 of the "Java Language Specification", which covers the Java memory model, is probably the only chapter that can't easily be read by regular programmers (maybe dynamic dispatch also falls into that category :)). My hope is that at least you will now be able to google the correct words.
A couple of links:
https://www.artima.com/insidejvm/ed2/threadsynchP.html (monitorenter/monitorexit, explanation)
https://www.ibm.com/developerworks/library/j-jtp10185/index.html (how lock are optimized inside jvm)
answered Nov 21 '18 at 14:04 by piotr szybicki