The fact that data is constant can be exploited to yield improved kernel function performance. The GPU hardware provides several caches or memory types that are designed for this purpose:
Constant memory: NVIDIA GPUs provide 64KB of constant memory that is treated differently from standard global memory. In some situations, using constant memory instead of global memory reduces the required memory bandwidth, which is beneficial for bandwidth-limited kernels. Constant memory is most effective when all threads access the same value at the same time (i.e., when the array index is not a function of the thread position).
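In plain CUDA terms, this mechanism corresponds to the __constant__ qualifier. The sketch below is illustrative only (the kernel and variable names are hypothetical, not Quasar's generated code); it shows the broadcast pattern where every thread reads the same coefficient in the same iteration:

```cuda
#include <cuda_runtime.h>

#define N_COEFFS 32

// Filter coefficients placed in the 64KB constant memory bank.
__constant__ float coeffs[N_COEFFS];

__global__ void conv_const(const float *d_in, float *d_out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float sum = 0.0f;
        // All threads read coeffs[k] for the same k simultaneously,
        // so the value is broadcast from the constant cache.
        for (int k = 0; k < N_COEFFS; k++)
            sum += d_in[i + k] * coeffs[k];
        d_out[i] = sum;
    }
}

// Host side: coefficients are uploaded once, before kernel launch:
// cudaMemcpyToSymbol(coeffs, h_coeffs, N_COEFFS * sizeof(float));
```

If threads within a warp read different constant memory addresses in the same instruction, the accesses are serialized, which is why the "same index for all threads" condition matters.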
Texture memory: texture memory is yet another type of read-only memory. With the advent of CUDA, the GPU’s sophisticated texture memory can also be used for general-purpose computing. Although originally designed for OpenGL and DirectX rendering, texture memory has properties that make it very useful for computing purposes. Like constant memory, texture memory is cached on chip, so it may provide higher effective bandwidth than accessing the off-chip DRAM directly. In particular, texture caches are designed for memory access patterns that exhibit a great deal of spatial locality.
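For reference, reading a linear device buffer through the texture cache in raw CUDA is done with the texture-object API (CUDA 5.0 and later). A minimal sketch, with illustrative names and error checking omitted:

```cuda
#include <cuda_runtime.h>

// Bind an existing linear device buffer to a texture object, so that
// subsequent loads go through the texture cache.
cudaTextureObject_t makeTex(const float *d_x, size_t n)
{
    cudaResourceDesc resDesc = {};
    resDesc.resType = cudaResourceTypeLinear;
    resDesc.res.linear.devPtr = const_cast<float *>(d_x);
    resDesc.res.linear.desc = cudaCreateChannelDesc<float>();
    resDesc.res.linear.sizeInBytes = n * sizeof(float);

    cudaTextureDesc texDesc = {};
    texDesc.readMode = cudaReadModeElementType;

    cudaTextureObject_t tex = 0;
    cudaCreateTextureObject(&tex, &resDesc, &texDesc, nullptr);
    return tex;
}

__global__ void readTex(cudaTextureObject_t tex, float *d_y, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        d_y[i] = tex1Dfetch<float>(tex, i); // cached texture load
}
```

Quasar's 'hwtex_nearest and 'hwtex_linear modifiers (introduced below) hide this boilerplate entirely.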
For practical purposes, the constant memory is rather small, so it is mostly useful for storing filter coefficients and weights that do not change while the kernel is executed. The texture memory, on the other hand, is quite large, has its own cache, and can be used for storing constant input signals/images.
The drawback is that texture memory is separate from the global memory and cannot be written from a kernel. As such, data needs to be copied from global memory to texture memory before the kernel is started, so sometimes two separate copies of the data must be stored in GPU memory. Luckily, for Kepler GPUs there is a solution (see 'hwtex_const below).
In Quasar, constant/texture memory can be utilized by adding modifiers to the kernel function parameter types. The following modifiers are available:
'hwconst: the vector/matrix is stored in constant memory. Note: if there is not enough constant memory available, a run-time error is generated!
'hwtex_nearest or 'hwtex_linear: the vector/matrix is stored in texture memory.
'hwtex_const: non-coherent texture cache. This option requires CUDA compute architecture 3.5 or higher (as in GeForce GPUs of the 900 series). It allows the data to remain in global memory while utilizing the texture cache for load operations. This combines the advantages of the texture cache with the flexibility (ability to read/write) of global memory. Note: internally, 'hwtex_const is implemented using the __ldg() intrinsic.
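For comparison, the __ldg() load path looks as follows in raw CUDA (compute capability 3.5+). This is a sketch with illustrative kernel and parameter names; the data stays in ordinary global memory and only the load instruction changes:

```cuda
// Convolution with read-only cache loads via __ldg().
// The __restrict__ qualifiers tell the compiler the buffers do not
// alias, which also enables it to use the read-only path on its own.
__global__ void conv_ldg(const float * __restrict__ x,
                         float *y,
                         const float * __restrict__ f,
                         int nf, int n)
{
    int pos = blockIdx.x * blockDim.x + threadIdx.x;
    if (pos < n) {
        float sum = 0.0f;
        for (int i = 0; i < nf; i++)
            sum += __ldg(&x[pos + i]) * __ldg(&f[i]); // texture-cache loads
        y[pos] = sum;
    }
}
```

No separate texture copy of the data is needed, which is exactly the advantage 'hwtex_const offers over 'hwtex_nearest/'hwtex_linear.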
Note that global memory accesses (i.e., without 'hw* modifiers) are cached in L2. For Kepler GPU devices, 'hwtex_const utilizes the texture cache directly, bypassing the L2 cache. The texture cache is a separate cache with a separate memory pipeline and relaxed memory coalescing rules, which may bring advantages to bandwidth-limited kernels. Source: Kepler tuning guide.
Starting with Maxwell GPU devices, the L1 cache and the texture caches are unified. The unified L1/texture cache coalesces the memory accesses, gathering up the data requested by the threads in a warp, before delivering the data to the warp. Source: Maxwell tuning guide.
Correspondingly, some optimization tips are:
When your kernel function uses constant vectors (e.g., weight vectors of relatively small length), and all threads access the same value of the vector at the same time (the index is not a function of the position!), you should definitely consider using 'hwconst.
When your kernel function accesses constant images (vec, mat or cube) on Kepler/Maxwell devices with compute architecture >= 3.5 (check quasar --version), it may be worthwhile to use 'hwtex_const.
Because of the L2 cache, memory coalescing and various bandwidth factors, please do not use 'hwconst or 'hwtex_const blindly. It is best to investigate whether the modifier improves the performance (e.g., using the Quasar profiler it is even possible to make comparison profiles). Only keep the modifiers when they improve the performance.
In the future, the Quasar compiler may be able to add 'hwconst and 'hwtex_const automatically. For now, let's start testing and understanding when these modifiers are beneficial!
As an example, consider the following convolution program:
Default version with no constant memory being used:
function [] = __kernel__ kernel(x : vec, y : vec, f : vec, pos : int)
    sum = 0.0
    for i=0..numel(f)-1
        sum += x[pos+i] * f[i]
    endfor
    y[pos] = sum
endfunction
Version with constant memory:
function [] = __kernel__ kernel_hwconst(x : vec, y : vec, f : vec'hwconst, pos : int)
    sum = 0.0
    for i=0..numel(f)-1
        sum += x[pos+i] * f[i]
    endfor
    y[pos] = sum
endfunction
Version with constant texture memory for f:
function [] = __kernel__ kernel_hwtex_const(x : vec, y : vec, f : vec'hwtex_const, pos : int)
    sum = 0.0
    for i=0..numel(f)-1
        sum += x[pos+i] * f[i]
    endfor
    y[pos] = sum
endfunction
Version with constant texture memory for x and f:
function [] = __kernel__ kernel_hwtex_const2(x : vec'hwtex_const, y : vec, f : vec'hwtex_const, pos : int)
    sum = 0.0
    for i=0..numel(f)-1
        sum += x[pos+i] * f[i]
    endfor
    y[pos] = sum
endfunction
Version with HW textures:
function [] = __kernel__ kernel_tex(x : vec, y : vec, f : vec'hwtex_nearest, pos : int)
    sum = 0.0
    for i=0..numel(f)-1
        sum += x[pos+i] * f[i]
    endfor
    y[pos] = sum
endfunction
Timing code:
x = zeros(2048^2)
y = zeros(size(x))
f = 0..31 % dummy filter coefficients

tic()
for k=0..99
    parallel_do(size(y),x,y,f,kernel)
endfor
toc("Default")

tic()
for k=0..99
    parallel_do(size(y),x,y,f,kernel_hwconst)
endfor
toc("'hwconst")

tic()
for k=0..99
    parallel_do(size(y),x,y,f,kernel_hwtex_const)
endfor
toc("'hwtex_const")

tic()
for k=0..99
    parallel_do(size(y),x,y,f,kernel_hwtex_const2)
endfor
toc("'hwtex_const2")

tic()
for k=0..99
    parallel_do(size(y),x,y,f,kernel_tex)
endfor
toc("'hwtex_nearest")
Results for the NVIDIA GeForce GTX 980 (Maxwell architecture):
Default: 513.0294 ms
'hwconst: 132.0075 ms
'hwtex_const: 128.0074 ms
'hwtex_const2: 95.005 ms
'hwtex_nearest: 169.0096 ms
It can be seen that using constant memory alone already yields a speed-up of almost a factor of 4(!), and that 'hwtex_const on f performs comparably to 'hwconst. The best performance is obtained with 'hwtex_const applied to both x and f, giving a speed-up of more than a factor of 5.
Note that a further improvement can be made using shared memory, with which we reach 85 ms in this case.
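The shared-memory variant mentioned here can be sketched in CUDA as follows (illustrative names and sizes; boundary handling at the end of the signal is simplified). Each block stages the filter and its input tile in on-chip shared memory once, so repeated reads no longer touch any off-chip cache hierarchy at all:

```cuda
#define NF 32      // filter length (matches the 32-tap filter above)
#define BLOCK 256  // threads per block

__global__ void conv_shared(const float *x, float *y, const float *f, int n)
{
    __shared__ float sf[NF];
    __shared__ float sx[BLOCK + NF - 1];

    int tid = threadIdx.x;
    int pos = blockIdx.x * BLOCK + tid;

    if (tid < NF)
        sf[tid] = f[tid];                  // stage filter once per block
    for (int i = tid; i < BLOCK + NF - 1; i += BLOCK)
        sx[i] = x[blockIdx.x * BLOCK + i]; // stage input tile cooperatively
    __syncthreads();

    if (pos < n) {
        float sum = 0.0f;
        for (int i = 0; i < NF; i++)
            sum += sx[tid + i] * sf[i];
        y[pos] = sum;
    }
}
```

The extra staging and synchronization cost is amortized because every input sample is reused up to NF times from shared memory, which explains the further gain over the cache-based modifiers.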