Decoding Tensors 2 — PyTorch

Jayanti Prasad, Ph.D.
Sep 9, 2019 · 5 min read


In the last part on tensors we discussed what a tensor is and why we need it. In this part we will discuss how a tensor object is used in one of the most popular machine/deep learning frameworks, Torch or PyTorch. In a later part the same will be repeated for TensorFlow.

Just as we can pass variables to mathematical functions and carry out operations on them such as addition and multiplication, we can do the same with tensors.

In Python's ecosystem tensors are objects, and so they have properties and methods (operations which can be performed on them).

Some of the most common properties of a Torch tensor are its 'shape', 'dtype' and 'requires_grad' (whether it requires a gradient or not, discussed below).
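For example (a minimal sketch, not from the original text), these properties can be inspected directly on any tensor:

>>> import torch
>>> T = torch.zeros([2, 3])
>>> T.shape
torch.Size([2, 3])
>>> T.dtype
torch.float32
>>> T.requires_grad
False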

This tutorial has the following sections:

1. Creating a tensor
2. Transforming a tensor
3. Algebraic Operations
4. Matrix Operations
5. Requires Grad

So let us begin.

1. Creating a PyTorch tensor

A tensor object can be created from non-tensor objects in one of the following ways (note that there may be more ways, but the following are the most common).

a) Creating from the data:

>>> import torch
>>> T=torch.tensor([[1,2,3], [4,5,6]], dtype=torch.int32)

These two lines (typed in the Python terminal) create a tensor 'T' whose elements are all integers. The shape of the tensor can be found with:

>>> T.shape
torch.Size([2, 3])

We can get the same with 'size' also:

>>> T.size()
torch.Size([2, 3])

The type of tensor elements can be found with:

>>> T.dtype
torch.int32

b) Creating from list:

Although this method is not encouraged, because list items can be of different types, when they are all of the same type we can create a torch tensor from a list as well.

>>> L = [float(i) for i in range(-10,10)]
>>> T = torch.tensor(L)

In the first line we create a list of floats in the range [-10, 10) and in the second line we create a tensor T out of it.
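As a small side illustration (not in the original text): if a list mixes integers and floats, 'torch.tensor' promotes all elements to a common floating-point dtype:

>>> T = torch.tensor([1, 2.5, 3])
>>> T.dtype
torch.float32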

c) Creating from numpy arrays :

The real world we live in is made of numpy arrays; we inhale and exhale numpy objects! This may be considered the most common way to create a torch tensor.

>>> import numpy as np
>>> X=np.array([[1,2,3],[4,5,6]], dtype=int)
>>> T=torch.from_numpy(X)
>>> type(X)
<class 'numpy.ndarray'>
>>> type(T)
<class 'torch.Tensor'>
>>> X.shape
(2, 3)
>>> T.shape
torch.Size([2, 3])
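Note that 'torch.from_numpy' does not copy the data: the tensor and the numpy array share the same memory, so changing one changes the other. A quick check, continuing the example above:

>>> X[0, 0] = 100
>>> T[0, 0].item()
100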

d) Creating with inbuilt functions:

Just as numpy has functions to create numpy arrays, torch also has similar functions. We can create tensors with the 'torch.randn', 'torch.zeros' and 'torch.ones' functions in the following way:

>>> T1=torch.randn(3,4)
>>> T1.shape
torch.Size([3, 4])

>>> T2 = torch.zeros([2,3])
>>> T2.shape
torch.Size([2, 3])

>>> T3 = torch.ones([4,5])
>>> T3.shape
torch.Size([4, 5])
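A few other creation functions work the same way; two examples (shown only as an illustration, the list is not exhaustive):

>>> torch.arange(6)
tensor([0, 1, 2, 3, 4, 5])
>>> torch.eye(3).shape
torch.Size([3, 3])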

e) Creating with repeat:

This is not exactly a way to create a torch tensor from non-tensor objects, but it can be used to create bigger tensors from smaller ones.

>>> x = torch.tensor([[1,2],[3,4]])
>>> y = x.repeat([4,6])
>>> x.shape
torch.Size([2, 2])
>>> y.shape
torch.Size([8, 12])

2. Transforming a PyTorch Tensor

In many cases we need to transform tensors before carrying out operations on them, mainly because some operations accept only tensors of a certain shape. Some of the important transformations are as follows:

a) Reshape:

>>> T1.shape
torch.Size([3, 4])

We can reshape T1 with:

>>> T2 = T1.reshape(2,6)
>>> T2.shape
torch.Size([2, 6])

Here we have converted a tensor of shape [3, 4] to [2, 6]. Note that the total number of elements must remain the same.
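If the number of elements does not match, PyTorch refuses to reshape; for example (the exact wording of the error message depends on the PyTorch version):

>>> T1.reshape(2, 5)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: shape '[2, 5]' is invalid for input of size 12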

b) View:

Sometimes it is useful to let one dimension be determined automatically when reshaping, and that can be done with 'view':

>>> T1.shape
torch.Size([3, 4])
>>> T3 = T1.view(-1,6)
>>> T3.shape
torch.Size([2, 6])
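One difference worth knowing: 'view' never copies data, so it only works when the new shape is compatible with the tensor's memory layout, whereas 'reshape' makes a copy when necessary. A small sketch with a transposed (non-contiguous) tensor:

>>> A = torch.randn(3, 4).t()   # transposing makes the tensor non-contiguous
>>> A.is_contiguous()
False
>>> A.reshape(-1).shape         # reshape copies the data when it has to
torch.Size([12])
>>> # A.view(-1) would raise a RuntimeError here, because view cannot copy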

c) Flatten:

In many cases we need to flatten a tensor, which can be done as follows:

>>> T=torch.randn([2,4])
>>> T.shape
torch.Size([2, 4])
>>> T1 = T.flatten()
>>> T1.shape
torch.Size([8])


>>> T1
tensor([-0.1750, -0.3075, -0.5173, -0.4692,  1.4873, -1.4829, -0.5283, -0.6822])
>>> T
tensor([[-0.1750, -0.3075, -0.5173, -0.4692],
        [ 1.4873, -1.4829, -0.5283, -0.6822]])

d) To List:

This creates a list out of a tensor, unlike 'flatten', which creates another tensor:

>>> T.shape
torch.Size([2, 4])
>>> L = T.tolist()
>>> type(T)
<class 'torch.Tensor'>
>>> type(L)
<class 'list'>
>>> len(L)
2
>>> L
[[-0.17498110234737396, -0.30754098296165466, -0.5173395872116089, -0.4692331552505493], [1.487285852432251, -1.4829455614089966, -0.5282540917396545, -0.6821660995483398]]
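For a tensor with a single element, the related 'item' method returns a plain Python number instead of a list:

>>> s = torch.tensor([3.5])
>>> s.item()
3.5
>>> type(s.item())
<class 'float'>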

e) To numpy arrays:

As we have discussed, we can create torch tensors from numpy arrays, and we can also convert a tensor back to a numpy array in the following way:

>>> T=torch.randn(2,3)
>>> type(T)
<class 'torch.Tensor'>
>>> T.shape
torch.Size([2, 3])
>>> X=T.numpy()
>>> type(X)
<class 'numpy.ndarray'>
>>> X.shape
(2, 3)
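Like 'from_numpy', the 'numpy' call shares memory with the tensor instead of copying it. It also works only for CPU tensors that do not require gradients; for a tensor with requires_grad=True we first have to detach it (a small sketch):

>>> T = torch.randn(2, 3, requires_grad=True)
>>> X = T.detach().numpy()
>>> X.shape
(2, 3)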

3. Algebraic Operations:

a) Addition:

We can add two tensors in the following three ways:

i) simple add

>>> T1=torch.randn([2,3])
>>> T2=torch.randn([2,3])
>>> T1.shape
torch.Size([2, 3])
>>> T2.shape
torch.Size([2, 3])
>>> T3=T1+T2
>>> T3.shape
torch.Size([2, 3])

ii) add method:

>>> T4=T1.add(T2)
>>> T4.shape
torch.Size([2, 3])

iii) add function:

>>> T5=torch.add(T1,T2)
>>> T5.shape
torch.Size([2, 3])

All of the above is true for subtraction also; the only thing we have to do is change the sign of T2 (or use 'sub' in place of 'add').

The above three types of operations hold for multiplication also, for which we have to replace 'add' with 'mul'.

Note that for all the operations discussed above, 'T1' and 'T2' must have exactly the same shape, and the operations are done element by element.
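For completeness, the same three spellings exist for the other element-wise operations, and methods ending with an underscore modify the tensor in place (a short sketch using the T1 and T2 defined above):

>>> (T1 - T2).shape           # same as T1.sub(T2) or torch.sub(T1, T2)
torch.Size([2, 3])
>>> (T1 * T2).shape           # element-wise product, same as T1.mul(T2)
torch.Size([2, 3])
>>> T1.add_(T2).shape         # 'add_' adds T2 to T1 in place
torch.Size([2, 3])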

4. Matrix operations:

We know that matrix operations are very strict and the dimensions must be compatible. The same is true for tensors.

Let us create three tensors:

>>> T1=torch.randn([2,3])
>>> T2=torch.randn([2,3])
>>> T3=torch.randn([3,2])

Now try multiplying T1 and T2:

>>> T4=torch.matmul(T1,T2)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: size mismatch, m1: [2 x 3], m2: [2 x 3] at
/Users/administrator/nightlies/pytorch-1.0.0/wheel_build_dirs/wheel_3.6/pytorch/aten/src/TH/generic/THTensorMath.cpp:940

We got the error because the dimensions are not compatible: the inner dimensions of a [2, 3] and a [2, 3] matrix do not match.

Now try this:

>>> T4=torch.matmul(T1,T3)
>>> T4.shape
torch.Size([2, 2])

This works! We can do the same with the following also:

>>> T4=T1.matmul(T3)
>>> T4.shape
torch.Size([2, 2])
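The '@' operator is shorthand for the same matrix multiplication:

>>> T4 = T1 @ T3
>>> T4.shape
torch.Size([2, 2])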

5. Requires grad

One of the interesting properties of a PyTorch tensor is autograd: when we create a tensor we can specify whether we will want the gradient, with respect to its components, of any scalar quantity created from this tensor.

Create a random tensor:

>>> T=torch.randn(100)

This tensor does not require grad and we can confirm it:

>>> T.requires_grad
False

Now let us set its requires_grad to True:

>>> T.requires_grad=True
>>> T.requires_grad
True
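As an aside, the flag can also be set at creation time, which is the more common pattern (using a fresh tensor W here so as not to disturb T):

>>> W = torch.randn(100, requires_grad=True)
>>> W.requires_grad
True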

Now let us create a scalar quantity from this tensor, for example its mean:

>>> y=T.mean()

Now if we check T.grad we do not get anything yet:

>>> T.grad.shape
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'NoneType' object has no attribute 'shape'

So far it is None. Now we can compute the gradient with:

>>> y.backward()
>>> T.grad.shape
torch.Size([100])

The gradient has the same shape as the original tensor, as expected.
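We can also check the values. Since y is the mean of 100 elements, the derivative of y with respect to each element is 1/100, so every entry of T.grad should be 0.01:

>>> T.grad[:3]
tensor([0.0100, 0.0100, 0.0100])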

All the magic was done by the 'backward' method! We end this tutorial here.

In the next part we will discuss how to pass and get tensors from functions.

If you like the article please like and share it, and if you have comments then post them below.
