Numpy, inserting one matrix in another matrix efficiently?












I am trying to make the outer product of two vectors more efficient by removing the zero elements, doing the outer product on the compacted vectors, and then either enlarging the resulting matrix with rows and columns of zeros or inserting it into a zero matrix. (Sparsifying the matrix with scipy does not really help, since the conversion cost is high and I am doing this over and over again.)



import numpy as np

dim = 100
vec = np.random.rand(1, dim)
mask = np.flatnonzero(vec > 0.8)
vec_sp = vec[:, mask]
mat_sp = vec_sp.T * vec_sp  # This is faster than the dot product
# Enlarge the matrix, or insert it into a zero matrix


Since it is the outer product of two vectors, I know the zero rows and columns in the original matrix; they are the indices in the mask variable. To see this:



a = np.array(((1, 0, 2, 0))).reshape(1, -1)
a.T * a
>> array([[1, 0, 2, 0],
          [0, 0, 0, 0],
          [2, 0, 4, 0],
          [0, 0, 0, 0]])


I have tried two different solutions. The first uses numpy's insert and append methods on the mat_sp variable; the whole thing becomes a for loop and is really slow.



for val in mask:
    if val < mat_sp.shape[0]:
        mat_sp = np.insert(mat_sp, val, values=0, axis=1)
        mat_sp = np.insert(mat_sp, val, values=0, axis=0)
    else:
        mat_sp = np.append(mat_sp, values=np.zeros((mat_sp.shape[0], 1)), axis=1)
        mat_sp = np.append(mat_sp, values=np.zeros((1, mat_sp.shape[1])), axis=0)


The other approach is to create a dim x dim zero matrix, build a giant index vector from the mask with two for loops, and then use that index vector to write the outer product into the zero matrix. However, this is also super slow.
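For reference, the scatter-back step can be done without any Python loops by indexing the zero matrix with the mask along both axes. A minimal sketch of the idea (assuming the discarded entries really are zero, so the reconstruction is exact):

import numpy as np

dim = 100
vec = np.random.rand(1, dim)
vec[vec < 0.8] = 0.0                   # make the vector genuinely sparse
mask = np.flatnonzero(vec)             # indices of the non-zero entries

oprod = vec[:, mask].T * vec[:, mask]  # compact outer product

full = np.zeros((dim, dim))
full[np.ix_(mask, mask)] = oprod       # one fancy-indexed assignment, no loops

assert np.allclose(full, vec.T * vec)  # matches the dense outer product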



Any ideas or insights that could solve the problem efficiently would be great, as the sparse matrix product takes only 2/3 of the time of the non-sparse one.





Using the example of @hpaulj, we get the following comparison code:



import numpy as np

dims = 400

def test_non_sparse():
    vec = np.random.rand(1, dims)
    a = vec.T * vec

def test_sparse():
    vec = np.random.rand(1, dims)
    idx = np.flatnonzero(vec > 0.75)
    oprod = vec[:, idx].T * vec[:, idx]
    vec_oprod = np.zeros((dims, dims))
    vec_oprod[idx[:, None], idx] = oprod


if __name__ == '__main__':
    import timeit
    print('Non sparse:', timeit.timeit("test_non_sparse()", setup="from __main__ import test_non_sparse", number=10000))
    print('Sparse:', timeit.timeit("test_sparse()", setup="from __main__ import test_sparse", number=10000))


The speedup from this code depends, of course, on the dimension of the vectors and on the number of zeros. Above roughly 300 dimensions and with around 70% zeros there is a modest improvement, which grows with the number of zero elements and with the dimension. If the matrices and the mask are the same over and over again, an even greater speedup is surely possible.
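As a minimal sketch of that last point (with a hypothetical fixed mask): the flat destination indices can be computed once, outside the loop, and reused for every scatter.

import numpy as np

dims = 400

# Hypothetical fixed mask, reused for every iteration.
idx = np.flatnonzero(np.random.rand(dims) > 0.75)
flat = (idx[:, None] * dims + idx).ravel()   # flat destination indices, computed once

out = np.zeros((dims, dims))
for _ in range(1000):                        # stand-in for the repeated work
    vec = np.random.rand(1, dims)
    oprod = vec[:, idx].T * vec[:, idx]
    out.ravel()[flat] = oprod.ravel()        # scatter through the precomputed indices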



(My mistake with the indexed assignment was using idx instead of idx[:, None] for the rows.)










python numpy sparse-matrix matrix-multiplication numpy-broadcasting






asked Nov 20 '18 at 20:21 by Reed Richards
edited Nov 21 '18 at 12:57













  • what are the dimensions you're dealing with? Also if the zeros are not in blocks not sure you'll gain any efficiency with this transformation.

    – karakfa
    Nov 20 '18 at 20:26











  • Both np.insert and np.append create new arrays. They don't work in-place like list operations do. Using them repeatedly in a loop is a bad idea. And read their code to see just how much work they do.

    – hpaulj
    Nov 20 '18 at 20:33













  • "I am trying to make a vector dot product more efficient by removing zero elements, doing the dot product and then enlarging the resulting matrix with rows of zeros or inserting into a zero matrix." - it seems like you'd spend more time fiddling with the zeros than you save in the dot product. Also, a vector dot product produces a scalar.

    – user2357112
    Nov 20 '18 at 20:34






  • Are you talking about an outer product?

    – user2357112
    Nov 20 '18 at 20:35











  • @karakfa For any dimension but say around 100-1000. I have timed the code and it gives a speed improvement (given that I can get it back to its original form).

    – Reed Richards
    Nov 20 '18 at 20:58



















1 Answer














The fastest way to insert one matrix into another is with indices.



To illustrate with your outer product:



In [94]: a = np.array(((1,0,2,0)))
In [95]: idx = np.where(a>0)[0]
In [96]: idx
Out[96]: array([0, 2])
In [97]: a1 = a[idx]


The outer product of the condensed array:



In [98]: a2 = a1[:,None]*a1
In [99]: a2
Out[99]:
array([[1, 2],
       [2, 4]])


Set up the result array, and use block indexing to identify where the a2 values go:



In [100]: res = np.zeros((4,4),int)
In [101]: res[idx[:,None], idx] = a2
In [102]: res
Out[102]:
array([[1, 0, 2, 0],
       [0, 0, 0, 0],
       [2, 0, 4, 0],
       [0, 0, 0, 0]])
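As a side note (continuing the session above), np.ix_ builds the same pair of index arrays, which reads a little more explicitly:

# np.ix_ constructs the (k,1)/(1,k) open grid from the 1-D index array.
res2 = np.zeros((4, 4), int)
res2[np.ix_(idx, idx)] = a2            # same scatter as res[idx[:,None], idx] = a2
assert np.array_equal(res, res2)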


Direct outer of the uncondensed array:



In [103]: a[:,None]*a
Out[103]:
array([[1, 0, 2, 0],
       [0, 0, 0, 0],
       [2, 0, 4, 0],
       [0, 0, 0, 0]])
In [104]: np.outer(a,a)
Out[104]:
array([[1, 0, 2, 0],
       [0, 0, 0, 0],
       [2, 0, 4, 0],
       [0, 0, 0, 0]])


If a were 2d with shape (1,n), this outer product could be written as np.dot(a.T, a). dot involves a sum, in this case over the size-1 dimension.
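For example, a quick check with the row-vector layout used in the question (continuing the same session):

a_row = np.array([[1, 0, 2, 0]])       # shape (1, 4)
# The (4,1) @ (1,4) matrix product reproduces the outer product.
assert np.array_equal(np.dot(a_row.T, a_row), np.outer(a_row, a_row))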



I think a would have to be quite sparse to benefit from this extra indexing work. With scipy sparse matrices I find that sparsity on the order of 1% is needed to get any speed advantage, even when the matrices are premade.





In [105]: from scipy import sparse
In [106]: A = sparse.csr_matrix(a)
In [107]: A
Out[107]:
<1x4 sparse matrix of type '<class 'numpy.int64'>'
    with 2 stored elements in Compressed Sparse Row format>
In [108]: A.A
Out[108]: array([[1, 0, 2, 0]], dtype=int64)
In [109]: A.T*A   # sparse matrix product, dot
Out[109]:
<4x4 sparse matrix of type '<class 'numpy.int64'>'
    with 4 stored elements in Compressed Sparse Column format>
In [110]: _.A
Out[110]:
array([[1, 0, 2, 0],
       [0, 0, 0, 0],
       [2, 0, 4, 0],
       [0, 0, 0, 0]], dtype=int64)
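A quick way to check that trade-off for your own sizes is a small timing harness (a sketch, not part of the answer's timings; the numbers will vary with dims and with how sparse the vector is):

import timeit

import numpy as np
from scipy import sparse

dims = 400
vec = np.random.rand(1, dims)
vec[vec > 0.25] = 0.0                  # keep roughly 25% of the entries
idx = np.flatnonzero(vec)
V = sparse.csr_matrix(vec)

def dense_outer():
    return vec.T * vec

def scatter_outer():
    out = np.zeros((dims, dims))
    out[idx[:, None], idx] = vec[:, idx].T * vec[:, idx]
    return out

def sparse_outer():
    return V.T * V                     # result stays sparse

for f in (dense_outer, scatter_outer, sparse_outer):
    print(f.__name__, timeit.timeit(f, number=1000))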





answered Nov 20 '18 at 20:58 by hpaulj
























  • Did you forget that ((1,0,2,0)) is just (1,0,2,0)? You made a 1D array where it looks like you wanted a 2D array.

    – user2357112
    Nov 20 '18 at 21:58










