Parallel programming on a GPU
Let’s dive together into the world of parallel computing.
Introduction
“The real world (nature) is in fact massively parallel.” Planetary motion, galaxy formation, and many other natural processes are examples of this.
First, let us see what parallel computing actually means. So here we go:
“Parallel computing is a type of computation in which many calculations or the execution of processes are carried out simultaneously.” One more thing I want to add here: concurrent computing is not parallel computing. Concurrency is about structuring a program so that many tasks overlap in time, while parallelism is about actually executing many calculations at the same instant; the two are often confused.
Problems with CPUs
- Temperature: heat dissipation limits how high clock speeds can go.
- Lithography limitations: transistors cannot keep shrinking forever.
- Quantum tunneling: at very small feature sizes, current leaks across transistors.
- Electricity travel speed: signals can only travel so far within a single clock cycle.
GPUs (Graphics Processing Units)
A graphics processing unit (GPU) is a specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device.
- Far higher aggregate computation power than CPUs.
- A single GPU can run thousands of calculations simultaneously, as the sketch after this list shows.
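To make that concrete, here is a minimal sketch of how a CUDA kernel spreads work across thousands of threads. The kernel name `scale` and the launch configuration are illustrative choices, not part of any particular application; the point is that one launch of 4096 blocks of 256 threads puts roughly a million threads to work.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread scales exactly one element; thousands of threads run concurrently.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        data[i] *= factor;
    }
}

int main() {
    const int n = 1 << 20;                     // about one million elements
    float *d_data = nullptr;
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));  // start from all zeros

    // 4096 blocks of 256 threads: roughly a million threads in a single launch.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    scale<<<blocks, threads>>>(d_data, 2.0f, n);
    cudaDeviceSynchronize();

    printf("Launched %d blocks of %d threads each\n", blocks, threads);
    cudaFree(d_data);
    return 0;
}
```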
Modern GPU frameworks
OpenCL — an open standard; a bit harder to use, but it runs on both ATI/AMD and NVIDIA architectures.
CUDA — CUDA is a parallel computing platform and programming model developed by NVIDIA for general computing on graphics processing units (GPUs). With CUDA, developers can dramatically speed up computing applications by harnessing the power of GPUs. It requires a CUDA-enabled GPU card.
You can check whether your GPU card is CUDA-enabled on NVIDIA's list of CUDA-enabled GPUs, or query the hardware directly with the short program sketched below.
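This is a rough sketch assuming the CUDA toolkit is installed and the program is compiled with `nvcc`; it uses the standard CUDA runtime calls `cudaGetDeviceCount` and `cudaGetDeviceProperties` to report every CUDA-capable device it finds.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("Found %d CUDA-capable device(s)\n", count);

    // Print a few basic properties of each device.
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s, compute capability %d.%d, %d multiprocessors\n",
               i, prop.name, prop.major, prop.minor, prop.multiProcessorCount);
    }
    return 0;
}
```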
NVIDIA realized the potential of bringing this performance to the larger scientific community and invested in modifying the GPU to make it fully programmable for scientific applications. It also added support for high-level languages like C, C++, and Fortran. This led to the CUDA parallel computing platform for the GPU.
Make sure you install the latest driver and set up your working environment correctly, and you are ready to get started with parallel computing in CUDA. A minimal end-to-end example follows.
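Here is a hedged sketch of the classic first CUDA program: element-wise vector addition. The buffer size and the kernel name `vecAdd` are arbitrary choices for illustration; what matters is the workflow of allocating device memory, copying inputs to the GPU, launching the kernel, and copying the result back. Compile it with something like `nvcc vecadd.cu -o vecadd` (the file name is just an example).

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel: each thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 16;
    size_t bytes = n * sizeof(float);

    // Host buffers with some test data.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = (float)i; h_b[i] = 2.0f * i; }

    // Device buffers.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);

    // Copy inputs to the GPU, launch the kernel, copy the result back.
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);

    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[100] = %f (expected %f)\n", h_c[100], h_a[100] + h_b[100]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

In real code you would also check the return value of every CUDA call; the sketch keeps only the happy path so the overall workflow stays visible.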
Thank you
Stay tuned!! bye bye!!