Linear combination of vectors that minimizes their L1 norm?
I have an orthogonal basis for a subspace of a much larger space. I think these basis vectors would be easier to interpret if they were sparse (i.e., mostly zero), like [1,-2,1,0,0,0,0,0], instead of dense as they are now.
If the basis spanned the entire larger space, then the sparsest representation of that basis would be [1,0,0...], [0,1,0...], [0,0,1...] and so on. However, the subspace has only about half the dimension of the space it resides in.
I would like a set of linear combinations of my basis vectors that has the same span but is "sparse." One convenient formalization of sparseness is the $L_1$ norm, which can be calculated as
$$
\|x\|_1 = \sum_{i=1}^{n} |x_i|.
$$
So, formally, given an $m\times n$ matrix $A$ with orthogonal columns, I would like to find an invertible $n\times n$ matrix $P$ so that the column space of $AP$ is the same as that of $A$, and also so that the sum of the absolute values of the entries in $AP$ is minimized. How might this be done?
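One common heuristic for obtaining a sparser spanning set (it is not guaranteed to minimize the $L_1$ norm) is to row-reduce $A^T$: the rows of the reduced row echelon form span the same subspace and tend to contain many zeros. A minimal sketch; the example matrix and the `rref` helper are hypothetical, not from the question:

```python
import numpy as np

def rref(M, tol=1e-12):
    """Reduced row echelon form via Gauss-Jordan elimination."""
    M = M.astype(float).copy()
    rows, cols = M.shape
    r = 0
    for c in range(cols):
        if r >= rows:
            break
        # Partial pivoting: pick the largest entry in this column.
        pivot = r + np.argmax(np.abs(M[r:, c]))
        if abs(M[pivot, c]) < tol:
            continue
        M[[r, pivot]] = M[[pivot, r]]
        M[r] /= M[r, c]
        for i in range(rows):
            if i != r:
                M[i] -= M[i, c] * M[r]
        r += 1
    return M

# Hypothetical dense orthonormal basis for a 2-D subspace of R^4
# (the basis vectors are the columns of A).
A = np.array([
    [0.5,  0.5],
    [0.5, -0.5],
    [0.5,  0.5],
    [0.5, -0.5],
])

# Row-reduce A^T: the rows of B span the same subspace as the
# columns of A, but are sparser.
B = rref(A.T)
print(B)
# [[1. 0. 1. 0.]
#  [0. 1. 0. 1.]]
```

Here the dense basis $\{(0.5,0.5,0.5,0.5), (0.5,-0.5,0.5,-0.5)\}$ becomes $\{(1,0,1,0), (0,1,0,1)\}$, which has the same span and fewer distinct nonzero patterns, though whether the RREF is actually $L_1$-optimal depends on the subspace.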
linear-algebra matrices
To decrease the $L_1$ norm of your basis you can simply scale the basis vectors down. In particular, the sum of the absolute values of the entries does not have a minimum.
– Servaes
Jan 28 at 23:47
And if the dimension of your subspace isn't too big, you could simply compute the left inverse of the matrix whose columns are your basis vectors. What are the dimensions of these spaces?
– Servaes
Jan 28 at 23:48
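The left inverse mentioned in the comment is easy to compute in practice; when the columns of $A$ are linearly independent, the Moore-Penrose pseudoinverse is a left inverse. A minimal sketch (the example matrix is hypothetical):

```python
import numpy as np

# Hypothetical matrix with orthonormal columns spanning a
# 2-D subspace of R^4.
A = np.array([
    [0.5,  0.5],
    [0.5, -0.5],
    [0.5,  0.5],
    [0.5, -0.5],
])

# For linearly independent columns, pinv(A) @ A is the identity,
# so pinv(A) is a left inverse of A.
L = np.linalg.pinv(A)
print(np.allclose(L @ A, np.eye(2)))  # True
```

For orthonormal columns, as here, the left inverse is simply $A^T$; the pseudoinverse handles the merely-orthogonal (unnormalized) case as well.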
asked Jan 28 at 22:55
Display Name