Time complexity of Parallel Reduction Algorithm
Currently I am studying GPU architecture and its concepts. In the parallel reduction technique, how does the time complexity shown on the 29th slide of the following NVIDIA guide come out to O(N/P + log N)? I know that for N threads it would be O(log N). If we have P threads available in parallel, then the time complexity should be O((N/P)*log P), right? Where am I wrong here?
Parallel Reduction Techniques
parallel-processing cuda time-complexity gpu-programming reduction
asked yesterday
Tapan Modi
82
2 Answers
accepted
I would like to explain this with an example. Consider this array with N = 8 elements:
1 2 3 4 5 6 7 8
The parallel reduction will occur in the following steps:
1 2 3 4 5 6 7 8
3 7 11 15
10 26
36
If you count the number of reduction operations, we have 4, 2, and 1 on the first, second, and third steps respectively. So the total number of operations is 4+2+1 = 7 = N-1, meaning all the reductions together take O(N) work. We also have log(8) = 3 steps (log to base 2), so we pay a cost of O(log N) just to go through the steps. Hence, if we used a single thread to reduce in this way, we would add the two costs, since they are incurred separately, and get O(N + log N): O(N) is the cost of doing all the operations and O(log N) is the cost of going through all the steps. There is no way to parallelize the cost of the steps, since they have to happen sequentially. However, we can use multiple threads to do the operations, which divides the O(N) cost down to O(N/P). Therefore we have
Total cost = O(N/P + log N)
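To make the two phases concrete, here is a minimal single-block CUDA sketch. This is not the code from the linked slides; the kernel name reduceNP, the fixed thread count P = 256, and the launch shown in the comment are illustrative assumptions. Each thread first serially accumulates roughly N/P elements, and the block then finishes with a log-step tree reduction in shared memory.

// Single-block sketch: P threads reduce n floats to one sum.
#define P 256   // thread count; assumed to be a power of two

__global__ void reduceNP(const float *in, float *out, int n)
{
    __shared__ float partial[P];
    int tid = threadIdx.x;

    // Phase 1: each thread serially sums about n/P elements -> O(N/P) time.
    float sum = 0.0f;
    for (int i = tid; i < n; i += P)
        sum += in[i];
    partial[tid] = sum;
    __syncthreads();

    // Phase 2: tree reduction over the P partial sums -> O(log P) steps.
    for (int stride = P / 2; stride > 0; stride >>= 1) {
        if (tid < stride)
            partial[tid] += partial[tid + stride];
        __syncthreads();
    }

    if (tid == 0)
        *out = partial[0];
}

// Launched, for example, as: reduceNP<<<1, P>>>(d_in, d_out, n);

The serial loop is the O(N/P) term and the tree is the logarithmic term; the slide's bound O(N/P + log N) follows since log P <= log N.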
edited yesterday
answered yesterday
Nirvedh Meshram
17010
I'm not familiar with CUDA, but usually in parallel reductions you
- first do a local reduction on each processor, which takes O(N/P), and then
- reduce the P local results, which takes O(log P) steps.
Hence you get O(N/P + log P).
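A hedged CUDA sketch of that two-phase scheme (the identifier localReduce and the launch configuration are assumptions, not code from the linked slides): each block computes a local partial sum of its slice of the input, and a second launch reduces the P per-block results.

__global__ void localReduce(const float *in, float *partials, int n)
{
    extern __shared__ float s[];
    int tid = threadIdx.x;
    int gridStride = gridDim.x * blockDim.x;

    // Step 1: local reduction. Each thread walks its part of the input in a
    // grid-stride loop, so the whole input costs O(N/P) per processor.
    float sum = 0.0f;
    for (int i = blockIdx.x * blockDim.x + tid; i < n; i += gridStride)
        sum += in[i];
    s[tid] = sum;
    __syncthreads();

    // Step 2: tree reduction of this block's partial sums
    // (blockDim.x assumed to be a power of two).
    for (int d = blockDim.x / 2; d > 0; d >>= 1) {
        if (tid < d)
            s[tid] += s[tid + d];
        __syncthreads();
    }
    if (tid == 0)
        partials[blockIdx.x] = s[0];
}

// Two launches: the first produces P partial results (one per block), the
// second (a single block of P threads) reduces them in O(log P) steps:
//   localReduce<<<P, THREADS, THREADS * sizeof(float)>>>(d_in, d_partials, n);
//   localReduce<<<1, P, P * sizeof(float)>>>(d_partials, d_out, P);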
answered yesterday
Julien
794