Decoding Tensors 4: Tensorflow part A
In the last three parts [part1, part2, part3] we discussed the general properties of tensors and how they are implemented in Pytorch. I think the only important and unusual property of tensors in Pytorch is 'automatic gradient' or 'autograd', and that will be covered in a future article. In this article we will discuss how tensors are created and used in Tensorflow. There have been a lot of changes from Tensorflow 1.x to 2.x, so we will give explicit examples in both. The three main common ways to create tensors in Tensorflow are the following (a quick sketch combining all three is given right after the list):
- With ‘tf.constant’
- With ‘tf.Variable’
- With ‘tf.placeholder’
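As a quick preview, here is a minimal sketch of my own, assuming Tensorflow 1.x (in 2.x, 'tf.placeholder' is only available through 'tensorflow.compat.v1'); each of these is covered in detail below.

import tensorflow as tf   # 1.x; in 2.x use: import tensorflow.compat.v1 as tf

# tf.constant: an immutable tensor whose value is fixed when the graph is built
c = tf.constant([1.0, 2.0, 3.0], name='c')

# tf.Variable: a mutable tensor that must be initialised before use
v = tf.Variable([1, 2, 3], dtype=tf.int32, name='v')

# tf.placeholder: an empty slot that is fed a value at run time via feed_dict
p = tf.placeholder(tf.float32, shape=(3,), name='p')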
Before we discuss these topics we would like to make the following comments:
It is important to note that, in my own experience, Tensorflow does not respect backward compatibility. Another twist is that the behaviour of Tensorflow on GPU and CPU may not be exactly identical. And if that much pain is not enough, here comes another one: the error message you get when your model fails may have nothing to do with the actual problem. Let me give an example: my sequence-to-sequence model was working fine on CPU, but when I tried it on GPU I got the error:
“tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [7600] vs. [400,19]”
Basically I was trying to use a batch size of 400 and that was rejected. What is funny is that when I set metrics=['accuracy'] things work! You can find more about the issue here:
https://github.com/kuza55/keras-extras/issues/7
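For reference, the workaround amounts to nothing more than passing the metric when compiling the model; a hedged sketch (the model architecture below is purely hypothetical, only the 'metrics' argument matters):

from tensorflow import keras

# hypothetical toy model; the point is only the metrics argument in compile()
model = keras.Sequential([
    keras.layers.Dense(19, activation='softmax', input_shape=(100,))
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])   # adding this avoided the shape error in my case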
We will not go into much detail and will mostly focus on the unusual and counter-intuitive (at least for beginners) aspects of Tensorflow. Our focus here will be only on creating tensors in Tensorflow.
Tensorflow tensors do not reveal the values they hold unless we run them within a 'session', which we will cover in one of the future articles. Here we will use what is called an 'interactive session'. There is also a mode in Tensorflow called 'eager execution' for which no session is needed. In 2.x this mode is enabled by default, so we will disable it in order to get similar behaviour in 1.x and 2.x.
A. Creating tensors with ‘tf.constant’ —
>>> import tensorflow as tf
>>> tf.__version__
'1.10.0'
>>> T1=tf.constant([1.0,2.0,3.0], name='T1', dtype=tf.float32)
>>> T1
<tf.Tensor 'T1_1:0' shape=(3,) dtype=float32>
>>>
Note that the value of the tensor is not revealed; for that we need to run a 'session'.
>>> sess = tf.InteractiveSession()
>>> sess.run(T1)
array([1., 2., 3.], dtype=float32)
>>>
Another method to get the values is to use the method ‘eval’
>>> T1=tf.constant([1.0,2.0,3.0], name='T1', dtype=tf.float32)
>>> T1.eval()
array([1., 2., 3.], dtype=float32)
>>>
Let us check some of the properties of the tensor:
>>> type(T1)
<class 'tensorflow.python.framework.ops.Tensor'>
>>> T1.name
'T1_2:0'
>>> T1.dtype
tf.float32
>>> T1.shape
TensorShape([Dimension(3)])
We can get the shape with the following also:
>>> T1.get_shape()
TensorShape([Dimension(3)])
We can apply mathematical operations in two different ways:
i) As usual:
>>> T1=tf.constant([1.0,2.0,3.0], name='T1', dtype=tf.float32)
>>> T2 = 2 * T1
>>> T2
<tf.Tensor 'mul:0' shape=(3,) dtype=float32>
>>> T2.eval()
array([2., 4., 6.], dtype=float32)
ii) Using the methods provided:
>>> T3=tf.add(T1,T2)
>>> T3.eval()
array([3., 6., 9.], dtype=float32)
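A few other element-wise operations work in exactly the same way; here is a small self-contained sketch of my own using the standard 'tf.multiply' and 'tf.subtract' ops:

import tensorflow as tf   # 1.x, or tensorflow.compat.v1 with eager execution disabled

T1 = tf.constant([1.0, 2.0, 3.0])
T2 = 2 * T1

T4 = tf.multiply(T1, T2)   # element-wise product
T5 = tf.subtract(T2, T1)   # element-wise difference

sess = tf.InteractiveSession()
print(sess.run(T4))        # [ 2.  8. 18.]
print(sess.run(T5))        # [1. 2. 3.]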
Let us try the same in Tensorflow 2.x.
Tensorflow 2.0.0 —
There are many other methods to create and manipulate tensors, but we will not explore them here since they may have changed in Tensorflow 2.0.
>>> import tensorflow as tf
>>> tf.__version__
'2.0.0'
>>> T1=tf.constant([1.0,2.0,3.0], name='T1', dtype=tf.float32)
2019-10-19 07:55:25.747041: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-10-19 07:55:25.762557: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7fb432926cb0 executing computations on platform Host. Devices:
2019-10-19 07:55:25.762605: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version
>>> T1
<tf.Tensor: id=0, shape=(3,), dtype=float32, numpy=array([1., 2., 3.], dtype=float32)>
>>>
Note that here we need neither a 'session' nor the 'eval' method to get the value of the tensor.
The reason is that in Tensorflow 2.0 eager execution is enabled by default. We can disable it and get back the session-based behaviour:
>>> import tensorflow as tf
>>> from tensorflow.python.framework.ops import disable_eager_execution
>>> disable_eager_execution()
>>> T1=tf.constant([1.0,2.0,3.0], name='T1', dtype=tf.float32)
>>> T1
<tf.Tensor 'T1:0' shape=(3,) dtype=float32>
>>> sess = tf.compat.v1.InteractiveSession()
2019-10-19 08:04:34.447278: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-10-19 08:04:34.461901: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7fe2e3d92680 executing computations on platform Host. Devices:
2019-10-19 08:04:34.461930: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version
>>> sess.run(T1)
array([1., 2., 3.], dtype=float32)
>>> T1.eval()
array([1., 2., 3.], dtype=float32)
>>> tf.__version__
'2.0.0'
We can try some of the operations:
>>> T1=tf.constant([1.0,2.0,3.0], name='T1', dtype=tf.float32)
>>> T2=T1+T1
>>> T2.eval()
array([2., 4., 6.], dtype=float32)
>>> T3=tf.add(T1,T1)
>>> T3.eval()
array([2., 4., 6.], dtype=float32)
>>>
To summarise, a tensor created with 'tf.constant' does not have too many public methods and attributes; the list is as follows:
For tf==2.0.0:
['consumers', 'device', 'dtype', 'eval', 'experimental_ref', 'get_shape', 'graph', 'name', 'op', 'set_shape', 'shape', 'value_index']
For tf==1.10.0:
['consumers', 'device', 'dtype', 'eval', 'get_shape', 'graph', 'name', 'op', 'set_shape', 'shape', 'value_index']
One new method ‘experimental_ref’ has been added in 2.0.0
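A list like the one above can be obtained (this is my own one-liner, not part of the recorded sessions) by filtering the tensor's attributes; note that the exact list depends on the version and on whether eager execution is enabled:

import tensorflow as tf

T1 = tf.constant([1.0, 2.0, 3.0])
# keep only the public attributes and methods
print([m for m in dir(T1) if not m.startswith('_')])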
B. Creating tensors with 'tf.Variable' —
Let us first try with 1.x. Note that in the transcript below T1 is the constant tensor from the previous section; the new variable V1 itself cannot be evaluated yet because it has not been initialised, which is what we do next.
>>> tf.__version__
'1.10.0'
>>> V1=tf.Variable([1,2,3], dtype=tf.int32, name='V1')
>>> V1
<tf.Variable 'V1_1:0' shape=(3,) dtype=int32_ref>
>>> V1.dtype
tf.int32_ref
>>> V1.shape
TensorShape([Dimension(3)])
>>> sess = tf.InteractiveSession()
>>> sess.run(T1)
array([1., 2., 3.], dtype=float32)
>>> T1.eval()
array([1., 2., 3.], dtype=float32)
>>>
Let us try some operations:
>>> import tensorflow as tf
>>> tf.__version__
'1.10.0'
>>> V1=tf.Variable([1,2,3], dtype=tf.int32, name='V1')
>>> V2=tf.Variable([1,2,3], dtype=tf.int32, name='V2')
>>> V3 = V1 + V2
>>> with tf.Session() as sess:
...     sess.run(tf.global_variables_initializer())
...     sess.run(V3)
...
2019-10-19 08:23:27.877361: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
array([2, 4, 6], dtype=int32)
Note that 'tf.global_variables_initializer()' is important here; without it the code will not work.
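For completeness, here is a minimal sketch of my own showing what typically goes wrong without the initializer: running an uninitialised variable raises a FailedPreconditionError in 1.x.

import tensorflow as tf  # 1.x

V1 = tf.Variable([1, 2, 3], dtype=tf.int32, name='V1')

with tf.Session() as sess:
    # sess.run(V1)  # without initialisation this raises FailedPreconditionError
    sess.run(tf.global_variables_initializer())  # initialise all variables first
    print(sess.run(V1))                          # [1 2 3]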
One nice thing about 'tf.Variable' is that we can assign new values to it. See the example:
>>> import tensorflow as tf
>>> tf.__version__
'1.10.0'
>>> V = tf.Variable([10,20], name="V")
>>> V
<tf.Variable 'V_1:0' shape=(2,) dtype=int32_ref>
>>> sess = tf.Session()
>>> init = tf.global_variables_initializer()
>>> sess.run(init)
>>> print(sess.run(V))
[10 20]
>>> sess.run(V.assign([50,60]))
array([50, 60], dtype=int32)
>>> V
<tf.Variable 'V_1:0' shape=(2,) dtype=int32_ref>
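Besides 'assign', a 1.x variable also supports in-place updates such as 'assign_add'; a small sketch of my own:

import tensorflow as tf  # 1.x

V = tf.Variable([10, 20], name='V')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
print(sess.run(V.assign_add([1, 1])))  # in-place increment: [11 21]
print(sess.run(V.assign([50, 60])))    # overwrite the value: [50 60]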
We can also create a Tensorflow variable with 'tf.get_variable'. See the example:
>>> import tensorflow as tf
>>> tf.__version__
'1.10.0'
>>> V=tf.get_variable("V",[1,2])
>>> V
<tf.Variable 'V:0' shape=(1, 2) dtype=float32_ref>
>>> sess = tf.Session()
2019-10-19 13:06:48.139282: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
>>> init = tf.global_variables_initializer()
>>> sess.run(init)
>>> print(sess.run(V))
[[ 1.3863872 -0.62589854]]
>>> sess.run(V.assign([[50.0,60.0]]))
array([[50., 60.]], dtype=float32)
>>> type(V)
<class 'tensorflow.python.ops.variables.Variable'>
>>>
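'tf.get_variable' is usually combined with 'tf.variable_scope', which allows the same variable to be reused by name; a minimal 1.x sketch of my own (the scope name 'layer1' is just an example):

import tensorflow as tf  # 1.x

with tf.variable_scope('layer1'):
    W = tf.get_variable('W', shape=[1, 2])        # creates layer1/W
with tf.variable_scope('layer1', reuse=True):
    W_again = tf.get_variable('W', shape=[1, 2])  # reuses the existing layer1/W
print(W is W_again)                               # True: same underlying variable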
Let us try the same exercise in 2.x
>>> import tensorflow as tf
>>> from tensorflow.python.framework.ops import disable_eager_execution
>>> tf.__version__
'2.0.0'
>>> disable_eager_execution()
>>> V=tf.Variable([1,2,3], dtype=tf.int32, name='V')
WARNING:tensorflow:From /anaconda3/envs/codx_env/lib/python3.6/site-packages/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
>>> V
<tf.Variable 'V:0' shape=(3,) dtype=int32>
>>> init = tf.compat.v1.global_variables_initializer()
>>> sess = tf.compat.v1.InteractiveSession()
2019-10-19 08:42:53.284044: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-10-19 08:42:53.298365: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f82d986f290 executing computations on platform Host. Devices:
2019-10-19 08:42:53.298404: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version
>>> sess.run(init)
>>> sess.run(V)
array([1, 2, 3], dtype=int32)
>>>
There are many details here. Let us summarise:
(a) In 2.0.0, 'eager execution' is enabled by default, so we disable it with:
>>> from tensorflow.python.framework.ops import disable_eager_execution
>>> disable_eager_execution()
(b) The session is created with:
sess = tf.compat.v1.InteractiveSession()
(c) We must create and run the initialiser before we run the actual graph:
init = tf.compat.v1.global_variables_initializer()
sess.run(init)
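Putting (a), (b) and (c) together, a minimal end-to-end sketch for 2.x (my own summary of the session above) looks like this:

import tensorflow as tf  # 2.x
from tensorflow.python.framework.ops import disable_eager_execution

disable_eager_execution()                            # (a) switch off eager execution

V = tf.Variable([1, 2, 3], dtype=tf.int32, name='V')
init = tf.compat.v1.global_variables_initializer()   # (c) build the initialiser

sess = tf.compat.v1.InteractiveSession()             # (b) create a session
sess.run(init)                                       # (c) run it before the actual graph
print(sess.run(V))                                   # [1 2 3]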
Let us try some operations:
>>> import tensorflow as tf
>>> tf.__version__
'2.0.0'
>>> from tensorflow.python.framework.ops import disable_eager_execution
>>> disable_eager_execution()
>>> V1=tf.Variable([1,2,3], dtype=tf.int32, name='V1')
>>> V2=tf.Variable([1,2,3], dtype=tf.int32, name='V2')
>>> V3 = V1 + V2
>>> init = tf.compat.v1.global_variables_initializer()
>>> with tf.compat.v1.Session() as sess:
...     init.run()
...     sess.run(V3)
...
array([2, 4, 6], dtype=int32)
>>>
Tensorflow is much more complex and harder to understand, mainly for two reasons: (1) its static computational graph model and (2) the way it is being developed, in particular the changes between 1.x and 2.x. Many methods that lived directly under 'tf' in 1.x have been removed in 2.x. In their place, 2.x provides the modules 'tensorflow.compat.v1' and 'tensorflow.compat.v2', which contain some of the methods from Tensorflow 1.x, and one can use something like:
import tensorflow.compat.v1 as tf
or
import tensorflow.compat.v2 as tf
to get similar behaviour.
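For example, a minimal sketch of my own (assuming Tensorflow 2.x, where 'disable_v2_behavior' is part of the compat.v1 module) that runs the 1.x-style code from this article under 2.x:

import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()   # restore 1.x graph-and-session semantics

T1 = tf.constant([1.0, 2.0, 3.0], name='T1')
sess = tf.InteractiveSession()
print(sess.run(T1))        # [1. 2. 3.]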
C. Creating tensors with 'tf.placeholder' —
For 1.x
>>> import tensorflow as tf
>>> tf.__version__
'1.10.0'
>>> V1 = tf.placeholder(tf.float32, shape=(2, 2), name='V1')
>>> V2 = tf.placeholder(tf.float32, shape=(2, 2), name='V2')
>>> V = V1 + V2
>>> sess = tf.Session()
2019-10-19 13:38:15.828666: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
>>> init = tf.global_variables_initializer()
>>> sess.run(init)
>>> print(sess.run(V, feed_dict={V1: [[2,3],[4,5]], V2: [[1,2],[5,6]]}))
[[ 3.  5.]
 [ 9. 11.]]
>>> type(V1)
<class 'tensorflow.python.framework.ops.Tensor'>
>>> V.shape
TensorShape([Dimension(2), Dimension(2)])
>>>
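A common pattern (not shown in the transcript above) is to leave the batch dimension as 'None', so the same placeholder accepts batches of different sizes; a minimal 1.x sketch of my own:

import tensorflow as tf  # 1.x

X = tf.placeholder(tf.float32, shape=(None, 2), name='X')  # unknown batch size
Y = 2.0 * X

with tf.Session() as sess:
    print(sess.run(Y, feed_dict={X: [[1, 2]]}))                  # batch of 1
    print(sess.run(Y, feed_dict={X: [[1, 2], [3, 4], [5, 6]]}))  # batch of 3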
Example for tf==2.x:
>>> import tensorflow.compat.v1 as tf
>>> from tensorflow.python.framework.ops import disable_eager_execution
>>> disable_eager_execution()
>>> tf.__version__
'2.0.0'
>>> V1 = tf.placeholder(tf.float32, shape=(2, 2), name='V1')
>>> V2 = tf.placeholder(tf.float32, shape=(2, 2), name='V2')
>>> V = V1 + V2
>>> sess = tf.Session()
2019-10-19 13:42:13.675571: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-10-19 13:42:13.690869: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f8e0c32c070 executing computations on platform Host. Devices:
2019-10-19 13:42:13.690896: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version
>>> init = tf.global_variables_initializer()
>>> sess.run(init)
>>> print(sess.run(V, feed_dict={V1: [[2,3],[4,5]], V2: [[1,2],[5,6]]}))
[[ 3.  5.]
 [ 9. 11.]]
>>> type(V1)
<class 'tensorflow.python.framework.ops.Tensor'>
>>> V.shape
TensorShape([2, 2])
>>>
Note that ‘tf.placeholder’ is not supported in ‘eager execution’ mode.
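In 2.x with eager execution enabled, the usual replacement for a placeholder is simply a function argument, optionally wrapped in 'tf.function'; a minimal sketch of my own (not from the sessions above):

import tensorflow as tf  # 2.x, eager execution left enabled

@tf.function
def add(v1, v2):
    # the function arguments play the role that placeholders played in 1.x
    return v1 + v2

print(add(tf.constant([[2., 3.], [4., 5.]]),
          tf.constant([[1., 2.], [5., 6.]])))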
If you find the article useful, please like and share, and post a comment if you have any. In the next article I will give more examples of Tensorflow tensors.