@falcondai
Last active April 1, 2021 09:12
TensorFlow implementation of guided backpropagation through ReLU
import tensorflow as tf
from tensorflow.python.framework import ops
from tensorflow.python.ops import gen_nn_ops


@ops.RegisterGradient("GuidedRelu")
def _GuidedReluGrad(op, grad):
    # Guided backpropagation: propagate the ReLU gradient only where the
    # incoming gradient is positive; block everything else.
    return tf.select(0. < grad, gen_nn_ops._relu_grad(grad, op.outputs[0]), tf.zeros(grad.get_shape()))


if __name__ == '__main__':
    with tf.Session() as sess:
        g = tf.get_default_graph()
        x = tf.constant([10., 2.])
        # Make every ReLU created in this block use the guided gradient.
        with g.gradient_override_map({'Relu': 'GuidedRelu'}):
            y = tf.nn.relu(x)
            z = tf.reduce_sum(-y ** 2)
        tf.initialize_all_variables().run()
        print x.eval(), y.eval(), z.eval(), tf.gradients(z, x)[0].eval()
        # z decreases with y, so every upstream gradient is negative and
        # guided backprop zeroes it out:
        # > [ 10.  2.] [ 10.  2.] -104.0 [ 0.  0.]
@JonGerrand

Could change tf.zeros(grad.get_shape()) to tf.zeros_like(grad) to make the implementation robust to incomplete tensor shapes.

Otherwise this is a great gist! It really helped me out. Thank you @falcondai
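
For reference, a minimal sketch of the gradient function with that change (the rest of the gist unchanged):

@ops.RegisterGradient("GuidedRelu")
def _GuidedReluGrad(op, grad):
    # tf.zeros_like builds the zero tensor from grad itself, so it also works
    # when grad's static shape is only partially known.
    return tf.select(0. < grad, gen_nn_ops._relu_grad(grad, op.outputs[0]), tf.zeros_like(grad))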

@sergii-bond

This works after replacing tf.select with tf.where for TensorFlow 1.2.
Thank you @falcondai
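
A minimal sketch of the same gradient function updated for TF 1.2, also folding in the tf.zeros_like suggestion above (the rest of the gist stays as posted):

@ops.RegisterGradient("GuidedRelu")
def _GuidedReluGrad(op, grad):
    # tf.select was removed in TF 1.x; tf.where provides the same
    # element-wise selection semantics here.
    return tf.where(0. < grad, gen_nn_ops._relu_grad(grad, op.outputs[0]), tf.zeros_like(grad))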

@mbasirat commented Jul 21, 2018

How is it possible to compute a guided gradient for other activation functions whose gradients are not included in gen_nn_ops?
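
One possible direction, not from the original gist: on TF 1.7+ you can sidestep gen_nn_ops entirely with tf.custom_gradient, which lets you write the backward pass of any activation yourself. A sketch for ELU, assuming the guided-backprop rule of dropping negative upstream gradients carries over unchanged:

import tensorflow as tf

@tf.custom_gradient
def guided_elu(x):
    y = tf.nn.elu(x)
    def grad_fn(dy):
        # ELU derivative: 1 where x > 0, exp(x) elsewhere.
        local_grad = tf.where(x > 0., tf.ones_like(x), tf.exp(x))
        # Guided backprop: zero out negative upstream gradients.
        return tf.where(dy > 0., dy * local_grad, tf.zeros_like(dy))
    return y, grad_fn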
