PyTorch Tensors: Step-by-Step Guide for Beginners

Introduction to PyTorch Tensors

In PyTorch, a popular open-source deep learning framework, tensors are multi-dimensional arrays used to represent data. PyTorch tensors are similar to NumPy arrays but with the added advantage of GPU acceleration for numerical computations, making them particularly well-suited for deep learning tasks.
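
For instance, a tensor can be created from a NumPy array and moved to a GPU when one is available. A minimal sketch (the transfer line is a no-op on CPU-only machines):

import numpy as np
import torch

arr = np.array([1.0, 2.0, 3.0])
t = torch.from_numpy(arr)        # shares memory with the NumPy array
device = "cuda" if torch.cuda.is_available() else "cpu"
t = t.to(device)                 # moves to the GPU when one is present
print(t.device)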

Here are some key points about tensors in PyTorch:

  1. Multi-dimensional arrays: Tensors in PyTorch can have any number of dimensions. They can be scalars (0-dimensional), vectors (1-dimensional), matrices (2-dimensional), or higher-dimensional arrays.

  2. Data types: PyTorch tensors can store data of different types such as integers, floats, and booleans. Additionally, PyTorch provides support for both CPU and GPU tensors, allowing for efficient computation on GPU devices.

  3. Automatic differentiation: PyTorch tensors enable automatic differentiation, a key feature for training deep learning models. PyTorch can automatically compute gradients of tensor operations, making it easy to implement and optimize neural networks (see the short sketch after this list).

  4. Creation and manipulation: PyTorch provides a wide range of functions for creating and manipulating tensors, similar to NumPy. Users can create tensors from Python lists, NumPy arrays, or directly with built-in functions like torch.tensor() or torch.zeros(). Tensors can also be reshaped, transposed, concatenated, sliced, and more.

  5. Integration with deep learning: PyTorch tensors serve as the fundamental data structure for building and training neural networks. Deep learning models in PyTorch are typically composed of layers that operate on tensors, and the gradients of these tensors are computed during the training process using techniques like backpropagation.
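
As a quick taste of point 3, here is a minimal autograd sketch: setting requires_grad=True tells PyTorch to track operations on a tensor, and calling backward() populates the .grad attribute with the gradient.

import torch

x = torch.tensor(3.0, requires_grad=True)
y = x**2 + 2*x       # dy/dx = 2x + 2
y.backward()
print(x.grad)        # 2*3 + 2 = 8
tensor(8.)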

Let's start by importing the PyTorch library and checking its version:
import torch
torch.__version__
'2.2.1+cu121'

Different types of tensors

  • Scalars
  • Vectors
  • Matrices
  • N-dimensional tensors

Scalar

A scalar tensor is a fundamental data structure in tensor-based frameworks like PyTorch, representing a single numerical value without any dimensions or axes. It is the simplest form of tensor, analogous to a scalar in mathematics, and is typically used to store and manipulate individual numerical quantities within computations.

# Scalar: a 0-dimensional tensor
s1 = torch.tensor(10)
print(s1)
tensor(10)
print(s1.ndim)    # number of dimensions: 0 for a scalar
print(s1.item())  # extract the value as a plain Python number
0
10

Vector

A vector tensor is a tensor with exactly one dimension, or axis. It represents a collection of numerical values arranged in a single sequence, akin to a mathematical vector. Vector tensors are commonly used to store and process sets of related data points or features within machine learning and scientific computations.

# Vector: a 1-dimensional tensor
vec1 = torch.tensor([10,10])
print(vec1)
tensor([10, 10])
print(vec1.ndim)
print(vec1.shape)    # .shape and .size() return the same information
print(vec1.size())
1
torch.Size([2])
torch.Size([2])

Matrix

A matrix tensor is a multi-dimensional array in tensor-based frameworks like PyTorch, characterized by having two dimensions or axes. It represents a rectangular grid of numerical values arranged in rows and columns, similar to a mathematical matrix. Matrix tensors are widely used in various computational tasks, including linear algebra operations, image processing, and neural network computations, among others.

# Matrix: a 2-dimensional tensor
M1 = torch.tensor([[10,20,30],
                   [40,50,60],
                   [70,80,90]])
print(M1)
tensor([[10, 20, 30],
        [40, 50, 60],
        [70, 80, 90]])
print(M1.ndim)
print(M1.shape)
print(M1.size())
2
torch.Size([3, 3])
torch.Size([3, 3])
print(M1[0])     # first row
print(M1[0,1])   # element at row 0, column 1
tensor([10, 20, 30])
tensor(20)
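
Because matrix tensors support the usual linear algebra operations, here is a quick sketch of matrix multiplication and transposition using M1 from above:

print(M1 @ M1)       # matrix product; equivalent to torch.matmul(M1, M1)
print(M1.T)          # transpose
tensor([[ 3000,  3600,  4200],
        [ 6600,  8100,  9600],
        [10200, 12600, 15000]])
tensor([[10, 40, 70],
        [20, 50, 80],
        [30, 60, 90]])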

N-dimensional Tensor

An N-dimensional tensor is a multi-dimensional array in tensor-based frameworks like PyTorch. Unlike scalar, vector, or matrix tensors, which have fixed numbers of dimensions (0, 1, and 2 respectively), an N-dimensional tensor can have an arbitrary number of dimensions, making it highly flexible for representing complex data structures. This flexibility enables applications across a wide range of fields such as computer vision, natural language processing, and scientific computing.

# N-dimensional tensor (this one has 3 dimensions)
T1 = torch.tensor([[[1,2,3],
                    [3,6,9],
                    [3,5,7]]])

print(T1)
tensor([[[1, 2, 3],
         [3, 6, 9],
         [3, 5, 7]]])
print(T1.ndim)
print(T1.shape)
3
torch.Size([1, 3, 3])
print(T1[0])      # the first (and only) 3x3 matrix
print(T1[0,0])    # its first row
print(T1[0,0,0])  # a single element
tensor([[1, 2, 3],
        [3, 6, 9],
        [3, 5, 7]])
tensor([1, 2, 3])
tensor(1)
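
Nothing limits us to three dimensions; nesting the lists one level deeper produces a 4-dimensional tensor, a shape commonly used for a batch of images. A minimal sketch:

T2 = torch.tensor([[[[1, 2],
                     [3, 4]],
                    [[5, 6],
                     [7, 8]]]])
print(T2.ndim)
print(T2.shape)
4
torch.Size([1, 2, 2, 2])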

Creating Random Tensors in PyTorch

Random tensors play a crucial role in many machine learning and deep learning tasks: they are used to initialize model parameters, generate synthetic data, and introduce randomness into training. PyTorch makes generating random tensors simple and flexible through a range of built-in functions that create tensors of a specified shape and data type, filled with values drawn from different distributions. Whether you are initializing weights in a neural network, augmenting a dataset, or exploring probabilistic models, knowing how to work with random tensors is essential for building robust machine learning systems.

## Random Tensors
import torch
rt1 = torch.rand(4,5)
print(rt1)
tensor([[0.5500, 0.8720, 0.6395, 0.0799, 0.3020],
        [0.2502, 0.4281, 0.4151, 0.6077, 0.0221],
        [0.2506, 0.8002, 0.6457, 0.7013, 0.6463],
        [0.8175, 0.3537, 0.3109, 0.0280, 0.8454]])
print(rt1.shape)
print(rt1.ndim)
torch.Size([4, 5])
2
rt2 = torch.rand(1,5,5)
print(rt2)
tensor([[[0.4416, 0.3939, 0.2921, 0.0261, 0.7478],
         [0.1550, 0.2993, 0.3356, 0.1607, 0.4795],
         [0.5784, 0.2375, 0.3055, 0.1671, 0.1477],
         [0.1063, 0.7390, 0.8740, 0.5362, 0.8068],
         [0.1905, 0.6792, 0.9476, 0.1836, 0.1842]]])
rt3 = torch.rand(3,5,5)
print(rt3)
tensor([[[0.8163, 0.0872, 0.6377, 0.1488, 0.9956],
         [0.7389, 0.9459, 0.7823, 0.0643, 0.3613],
         [0.4636, 0.8717, 0.4562, 0.4800, 0.2178],
         [0.7341, 0.2803, 0.4451, 0.8387, 0.7366],
         [0.3267, 0.8672, 0.1911, 0.2855, 0.5340]],

        [[0.8962, 0.2618, 0.0603, 0.6335, 0.8865],
         [0.6841, 0.3350, 0.8097, 0.0948, 0.1586],
         [0.2651, 0.1407, 0.2495, 0.3107, 0.5707],
         [0.0284, 0.0631, 0.7315, 0.7250, 0.9454],
         [0.2690, 0.2595, 0.3368, 0.8700, 0.6503]],

        [[0.7131, 0.4645, 0.5785, 0.3686, 0.3134],
         [0.5476, 0.7581, 0.9991, 0.6759, 0.1422],
         [0.2117, 0.4916, 0.1462, 0.9029, 0.5887],
         [0.4446, 0.1713, 0.8241, 0.1774, 0.2027],
         [0.4950, 0.6531, 0.6837, 0.2710, 0.5333]]])
print(rt3.ndim)
3
rt_image = torch.rand(size=(512,512,3))  # shaped like a 512x512 RGB image (height, width, channels)
#print(rt_image)
print(rt_image.size())
print(rt_image.shape)
print(rt_image.ndim)
torch.Size([512, 512, 3])
torch.Size([512, 512, 3])
3
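
torch.rand() draws from a uniform distribution on [0, 1); other built-in generators cover other distributions, and torch.manual_seed() makes results reproducible across runs. A short sketch (the printed values depend on the seed and PyTorch version, so they are omitted here):

torch.manual_seed(42)                              # fix the RNG for reproducibility
rn = torch.randn(2, 3)                             # standard normal distribution
ri = torch.randint(low=0, high=10, size=(2, 3))    # integers drawn from [0, 10)
print(rn.shape, ri.shape)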

Creating Tensors With Zeros and Ones in PyTorch

Zeros and ones tensors are fundamental building blocks that let you initialize tensors with predetermined values of all zeros or all ones. In PyTorch they are created with the dedicated functions torch.zeros() and torch.ones(). Zeros tensors are commonly used to initialize bias parameters or to create placeholders and masks for subsequent data storage, while ones tensors serve similar initialization purposes. Knowing how to create and manipulate zeros and ones tensors is essential for initializing models, defining placeholders, and implementing many algorithms within the framework.

## Create all zeros tensor
zeros = torch.zeros(size=(4,4))
print(zeros)
print(zeros.dtype)
print(zeros.ndim)
tensor([[0., 0., 0., 0.],
        [0., 0., 0., 0.],
        [0., 0., 0., 0.],
        [0., 0., 0., 0.]])
torch.float32
2
## Create all ones tensor
ones = torch.ones(size=(4,4))
print(ones)
print(ones.dtype)
print(ones.ndim)
tensor([[1., 1., 1., 1.],
        [1., 1., 1., 1.],
        [1., 1., 1., 1.],
        [1., 1., 1., 1.]])
torch.float32
2
## Create all ones ndim tensor
ones = torch.ones(size=(3,4,4))
print(ones)
print(ones.dtype)
print(ones.ndim)
tensor([[[1., 1., 1., 1.],
         [1., 1., 1., 1.],
         [1., 1., 1., 1.],
         [1., 1., 1., 1.]],

        [[1., 1., 1., 1.],
         [1., 1., 1., 1.],
         [1., 1., 1., 1.],
         [1., 1., 1., 1.]],

        [[1., 1., 1., 1.],
         [1., 1., 1., 1.],
         [1., 1., 1., 1.],
         [1., 1., 1., 1.]]])
torch.float32
3
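
Closely related to torch.zeros() and torch.ones() is torch.full(), which fills a tensor with an arbitrary constant. A minimal sketch:

## Create a tensor filled with a constant value
fives = torch.full(size=(2,4), fill_value=5.0)
print(fives)
print(fives.dtype)
tensor([[5., 5., 5., 5.],
        [5., 5., 5., 5.]])
torch.float32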

Creating a Tensor Range and Tensors Like Other Tensors

Range tensors provide a convenient way to generate sequences of numerical values, useful for index creation, data generation, and sequence processing. In PyTorch they are created with torch.arange(), which takes a start, an (exclusive) end, and an optional step size. This makes it easy to build arithmetic progressions, integer sequences, and index arrays for array indexing, data manipulation, and algorithmic work. PyTorch also offers *_like functions, such as torch.zeros_like() and torch.ones_like(), that create new tensors with the same shape and dtype as an existing one.

import torch
## Use torch.arange() (torch.range() is deprecated)
t1 = torch.arange(start=1, end=11)   # end is exclusive, so this yields 1..10
print(t1)
tensor([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10])
t2 = torch.arange(start=1, end=100, step=3)
print(t2)
tensor([ 1,  4,  7, 10, 13, 16, 19, 22, 25, 28, 31, 34, 37, 40, 43, 46, 49, 52,
        55, 58, 61, 64, 67, 70, 73, 76, 79, 82, 85, 88, 91, 94, 97])
## Creating tensors like other tensors
t1_zeros = torch.zeros_like(t1)      # same shape and dtype as t1, filled with zeros
print(t1_zeros)
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
t2_ones = torch.ones_like(t2)        # same shape and dtype as t2, filled with ones
print(t2_ones)
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
        1, 1, 1, 1, 1, 1, 1, 1, 1])
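
The *_like family is not limited to zeros and ones; torch.rand_like() works the same way for random values (on floating-point tensors). torch.linspace() is a useful companion to torch.arange() when you want a fixed number of evenly spaced values instead of a fixed step. A minimal sketch:

t3 = torch.linspace(start=0, end=1, steps=5)   # 5 evenly spaced values from 0 to 1 inclusive
print(t3)
tensor([0.0000, 0.2500, 0.5000, 0.7500, 1.0000])
t3_rand = torch.rand_like(t3)                  # same shape and dtype as t3, uniform random values
print(t3_rand.shape)
torch.Size([5])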

Happy PyTorch coding!
