How can I compute the gradient w.r.t. a non-variable in TensorFlow's eager execution mode?
I am trying to compute the gradient of my model's loss with respect to its input in order to create an adversarial example. Since the model's input is non-trainable, I need to compute the gradient with respect to a tensor, not a variable. However, I found that TensorFlow's GradientTape returns None gradients if the tensor is not a trainable variable:
import numpy as np
import tensorflow as tf

tf.enable_eager_execution()

a = tf.convert_to_tensor(np.array([1., 2., 3.]), dtype=tf.float32)  # plain tensor
b = tf.constant([1., 2., 3.])                                       # constant tensor
c = tf.Variable([1., 2., 3.], trainable=False)                      # non-trainable variable
d = tf.Variable([1., 2., 3.], trainable=True)                       # trainable variable

with tf.GradientTape() as tape:
    result = a + b + c + d

grads = tape.gradient(result, [a, b, c, d])
print(grads)
which prints:
[None, None, None, <tf.Tensor: id=26, shape=(3,), dtype=float32, numpy=array([1., 1., 1.], dtype=float32)>]
I went through TensorFlow's Eager Execution tutorial and the Eager Execution guide, but couldn't find a solution for calculating the gradient w.r.t. a tensor.
python tensorflow
asked Nov 19 '18 at 22:27 by Kilian Batzner
1 Answer
The tf.GradientTape documentation reveals the simple solution:

Trainable variables (created by tf.Variable or tf.get_variable, where trainable=True is default in both cases) are automatically watched. Tensors can be manually watched by invoking the watch method on this context manager.
In this case,
with tf.GradientTape() as tape:
    # Explicitly watch the plain tensors so the tape records operations on them.
    tape.watch(a)
    tape.watch(b)
    tape.watch(c)
    result = a + b + c + d

grads = tape.gradient(result, [a, b, c, d])
after which print(grads) outputs:
[<tf.Tensor: id=26, shape=(3,), dtype=float32, numpy=array([1., 1., 1.], dtype=float32)>,
<tf.Tensor: id=26, shape=(3,), dtype=float32, numpy=array([1., 1., 1.], dtype=float32)>,
<tf.Tensor: id=26, shape=(3,), dtype=float32, numpy=array([1., 1., 1.], dtype=float32)>,
<tf.Tensor: id=26, shape=(3,), dtype=float32, numpy=array([1., 1., 1.], dtype=float32)>]
answered Nov 19 '18 at 22:27 by Kilian Batzner
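Applying this to the adversarial-example use case from the question, below is a minimal FGSM-style sketch of how tape.watch lets you take the gradient of a loss with respect to an input tensor. The names model, loss_fn, input_image, label, and epsilon are hypothetical placeholders, not part of any specific API; any callable model and loss with compatible shapes should work.

import tensorflow as tf

tf.enable_eager_execution()  # TensorFlow 1.x; in 2.x eager execution is on by default

def make_adversarial_example(model, loss_fn, input_image, label, epsilon=0.01):
    # `model`, `loss_fn`, `input_image`, and `label` are hypothetical placeholders.
    input_image = tf.convert_to_tensor(input_image, dtype=tf.float32)
    with tf.GradientTape() as tape:
        # The input is a plain tensor, so it must be watched explicitly.
        tape.watch(input_image)
        prediction = model(input_image)
        loss = loss_fn(label, prediction)
    # Gradient of the loss with respect to the input, not the model weights.
    gradient = tape.gradient(loss, input_image)
    # Nudge the input in the direction that increases the loss (FGSM).
    return input_image + epsilon * tf.sign(gradient)

As a closing note, in TensorFlow 2.x the tape also accepts watch_accessed_variables=False, which turns off automatic watching entirely, so that only tensors and variables passed to tape.watch are recorded.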