How to find the version of CUDA

Users of CUDA usually know its major.minor version.

If you want to know the full major.minor.patch version of CUDA you are using:

$ cat /usr/local/cuda/version.txt

For example, when I tried on my CUDA 9.2 installation:

$ cat /usr/local/cuda-9.2/version.txt                                                                                                                                                                        
CUDA Version 9.2.148
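If you want to query the version from inside a program, the CUDA runtime API can also report it. Note that this is just a sketch (not from the original post) and it only gives major.minor, not the patch level:

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Versions are encoded as 1000*major + 10*minor, e.g. 9020 for CUDA 9.2
    int runtimeVersion = 0, driverVersion = 0;
    cudaRuntimeGetVersion(&runtimeVersion);  // version of the CUDA runtime in use
    cudaDriverGetVersion(&driverVersion);    // latest CUDA version the installed driver supports
    printf("Runtime: %d.%d, Driver supports: %d.%d\n",
           runtimeVersion / 1000, (runtimeVersion % 1000) / 10,
           driverVersion / 1000, (driverVersion % 1000) / 10);
    return 0;
}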

OpenCV CUDA CMake error

Problem

I was building OpenCV using CMake and got this error:

CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
opencv_dep_CUDA_nppi_LIBRARY

Solution

The error was related to CUDA, probably through a dependency on the NVIDIA Performance Primitives (NPP) library. Since I needed neither CUDA nor NPP, I rebuilt OpenCV without CUDA support, as described here, by setting WITH_CUDA=OFF.
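For reference, the CMake invocation for such a CPU-only build looks something like this (the build directory and any other flags are up to you):

$ cmake -D WITH_CUDA=OFF ..
$ make -j4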

Stub library warning on libnvidia-ml.so

Problem

I tried to run a program compiled with CUDA 9.0 inside a Docker container and got this warning:

WARNING:

You should always run with libnvidia-ml.so that is installed with your
NVIDIA Display Driver. By default it's installed in /usr/lib and /usr/lib64.
libnvidia-ml.so in GDK package is a stub library that is attached only for
build purposes (e.g. machine that you build your application doesn't have
to have Display Driver installed).

Solution

Let us first try to understand the warning and where it comes from. The program compiled with CUDA 9.0 has been linked against libnvidia-ml.so, the shared library of the NVIDIA Management Library (NVML). During execution, libnvidia-ml.so prints this warning. Why?

From the message, we get an indication that there are two libnvidia-ml.so files. One is a stub that is used during compilation and linking. I guess it just provides the necessary function symbols and signatures, but it cannot be used to run the compiled executable. If we do run with that stub shared library, it prints this warning.

So, there is a second libnvidia-ml.so: the real shared library. It turns out that the management library is provided by the NVIDIA display driver, so every version of the display driver has its own libnvidia-ml.so file. I had NVIDIA display driver 384.66 on my machine and found libnvidia-ml.so under /usr/lib/nvidia-384. The stub library allows you to compile on machines where the NVIDIA display driver is not installed. In our case, for some reason, the loader was picking up the stub instead of the real library during execution.

Using the chrpath tool, described here, I found that the compiled binary did indeed have the stub library directory in its RPATH: /usr/local/cuda/lib64/stubs. That directory did have a libnvidia-ml.so. Running the strings tool on that shared library confirmed that it was the origin of the above message:

$ strings libnvidia-ml.so | grep "You should always run with"
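For reference, this is how you can list the RPATH of a binary with chrpath (the binary name here is just an illustration):

$ chrpath -l ./my_cuda_program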

Since the binary has an RPATH, described here, containing the stubs path, the stub library was getting picked up in preference to the actual libnvidia-ml.so in the driver directory (/usr/lib/nvidia-384 in my case). The solution I came up with was to add a command to the docker run invocation to delete the stubs directory:

$ rm -rf /usr/local/cuda/lib64/stubs

That way, the stubs directory was still available outside Docker for compilation. It just appeared deleted inside the Docker container, forcing the loader to pick up the real libnvidia-ml.so during execution.
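The relevant part of the docker run invocation might look something like this sketch (image and program names are illustrative, GPU-related flags omitted; this is not the original invocation):

$ docker run --rm my-cuda-image /bin/bash -c "rm -rf /usr/local/cuda/lib64/stubs && ./my_cuda_program"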

How to install CUDA for NVIDIA GTX 1050 (Notebook)

Installing NVIDIA graphics drivers on Linux has never been easy for me! I bought a notebook with NVIDIA GTX 1050 GPU recently and installed Kubuntu 16.04. I had to wait for more than a month for NVIDIA to release drivers that supported the notebook 1050 variant.

  • Once the driver was released, I downloaded the .run file directly from NVIDIA’s website here. I ran the installation:
$ sudo sh NVIDIA-Linux-x86_64-381.22.run

When I rebooted, I got a black screen! Not surprising with NVIDIA and Linux! I had to uninstall it to get back to work:

$ sudo sh NVIDIA-Linux-x86_64-381.22.run --uninstall
  • After another month, I found that the latest NVIDIA driver supporting the notebook 1050 was available from Ubuntu. So, I tried installing that:
$ sudo apt install nvidia-381

I rebooted and got a new error message in a GUI dialog box:

The system is running in low-graphics mode
Your screen, graphics card, and input device settings could not be detected correctly.
You will need to configure these yourself.

I had to uninstall it to get back to work:

$ sudo apt purge nvidia-381
  • It finally dawned on me that what I really wanted was to be able to run CUDA programs on the GPU. I did not really care about X or games being able to use the GPU. So, I went back to the .run driver and installed it without OpenGL:
$ sudo sh NVIDIA-Linux-x86_64-381.22.run --no-opengl-files

After rebooting, I found that I still had a desktop. That was a big relief! I proceeded to download and install CUDA:

$ sudo sh cuda_8.0.61_375.26_linux.run

I took care not to install the graphics driver that comes along with the CUDA installer. That is it! I was able to compile and run the CUDA samples. Running ./deviceQuery from the samples showed the GTX 1050 and that is all I wanted! 🙂
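Assuming the samples were installed to their default location in your home directory, building and running deviceQuery goes something like this:

$ cd ~/NVIDIA_CUDA-8.0_Samples/1_Utilities/deviceQuery
$ make
$ ./deviceQuery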

How to make CUDA and nvidia-smi use same GPU ID

One of the irritating problems I encounter while working with CUDA programs is the GPU ID. This is the identifier used to associate an integer with a GPU on the system. This is just 0 if you have one GPU in the computer. But when dealing with a system having multiple GPUs, the GPU ID used by CUDA and the GPU ID used by non-CUDA programs like nvidia-smi are different! CUDA tries to associate the fastest GPU with the lowest ID. Non-CUDA tools use the PCI Bus ID of the GPUs to give them a GPU ID.

One workaround I was using was cuda-smi, which shows GPU information using CUDA GPU IDs.

There is a better solution: requesting CUDA to use the same PCI Bus ID enumeration order as used by non-CUDA programs. To do this, set the CUDA_DEVICE_ORDER environment variable to PCI_BUS_ID in your shell. The default value of this variable is FASTEST_FIRST. More info on this can be found here. Note that this is available only in CUDA 7 and later.
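For example, in a Bash shell (the program name is just an illustration):

$ export CUDA_DEVICE_ORDER=PCI_BUS_ID
$ ./my_cuda_program

Or set it only for a single invocation:

$ CUDA_DEVICE_ORDER=PCI_BUS_ID ./my_cuda_program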

Thrust error on min_element or max_element

Problem

I was compiling some old CUDA code with a recent version of the CUDA SDK. I got these errors on Thrust methods:

error: namespace thrust has no member max_element
error: namespace thrust has no member min_element

Solution

In recent versions of the CUDA SDK, these Thrust functions have been moved to the extrema.h header. The errors go away if you include thrust/extrema.h, as in the sketch below.
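A minimal sketch of the fix (the variable names and values are mine, not from the original code):

#include <thrust/device_vector.h>
#include <thrust/extrema.h>  // provides thrust::min_element and thrust::max_element

int main() {
    thrust::device_vector<int> d_vec(4);
    d_vec[0] = 3; d_vec[1] = 7; d_vec[2] = 1; d_vec[3] = 5;

    // Iterators to the smallest and largest elements on the device
    thrust::device_vector<int>::iterator min_it =
        thrust::min_element(d_vec.begin(), d_vec.end());
    thrust::device_vector<int>::iterator max_it =
        thrust::max_element(d_vec.begin(), d_vec.end());

    int min_val = *min_it;  // 1
    int max_val = *max_it;  // 7
    return 0;
}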

Tried with: CUDA SDK 7.5 and Ubuntu 14.04

How to use PhysX on Linux

PhysX is a 3D game physics engine provided by NVIDIA. They have released the source code of the engine on Github, though with restricted access. This library and its sample programs can be compiled from source easily.

Here are my notes on how to compile this library, run its sample tools and get started:

  • You need a Linux computer (I use Ubuntu), with a fairly modern NVIDIA graphics card.

  • Make sure you have installed recent NVIDIA graphics drivers and a recent version of CUDA on it. Ensure that these are working correctly before trying PhysX.

  • Go to this webpage and jump through their hoops to get access to the PhysX Github page. Essentially, NVIDIA requires you to create a login with them and after that they give your Github login access to their PhysX source code.

  • Once you have access to PhysX source code, clone its repository to your computer:

$ git clone https://github.com/NVIDIAGameWorks/PhysX-3.3.git
  • The documentation in the source code is outdated and misleading. The source code is laid out like this: Source holds the library source code, while Snippets and Samples hold small programs to try out PhysX. Once you have compiled the Snippets and Samples, their binaries are placed in Bin/linux64.

  • Each of the above three code directories has a compiler/linux64 directory which holds the Makefiles to build them. There are four build profiles available: release, profile, checked and debug. Just invoking make builds all four versions. To build just the release versions, I did make release in all three code directories (see the example after this list).

  • Once the library and its snippets and samples are built, you can try these programs from Bin/linux64. For example, the samples program allows you to try many of the features of the engine in an interactive GUI.
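For example, building just the release profile of the core library would look something like this (the same pattern applies to Snippets and Samples):

$ cd PhysX-3.3/Source/compiler/linux64
$ make release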

Tried with: PhysX 3.3.4, NVIDIA GeForce GTX 750 Ti and Ubuntu 14.04

CMake fails looking for CUDA_TOOLKIT_ROOT_DIR

Problem

You have installed CUDA and try to compile a CUDA program using CMake, which fails with this error:

$ cmake ..
CMake Error at /usr/share/cmake-2.8/Modules/FindCUDA.cmake:548 (message):
  Specify CUDA_TOOLKIT_ROOT_DIR
Call Stack (most recent call first):
  CMakeLists.txt:3 (find_package)

Solution 1

FindCUDA.cmake is trying to find your CUDA installation directory and failing. I had installed CUDA 7.0 on this machine, which was in /usr/local/cuda-7.0. However, CMake looks for /usr/local/cuda. The CUDA installer is supposed to create a symbolic link /usr/local/cuda pointing to that actual installation directory.

That symbolic link was not there on this computer. This can sometimes happen when you have two CUDA installations and remove one of them. The one removed takes out the symbolic link with it. I had CUDA 6.5 and CUDA 7.0 on this computer before I removed CUDA 6.5.

Anyway, we now know how to fix this:

$ sudo ln -s /usr/local/cuda-7.0 /usr/local/cuda
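You can then verify that the symbolic link points to the right place:

$ ls -l /usr/local/cuda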

Solution 2

Pass the CUDA installation directory to the CUDA_TOOLKIT_ROOT_DIR variable directly during the invocation of CMake:

$ cmake -D CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-7.0 ..

Tried with: CUDA 7.0 and Ubuntu 14.04

Syntax highlighting for CUDA in Vim

Problem

Vim has had syntax highlighting support for CUDA source files for a long time now. You can check this: open any .cu file and try the command :set filetype?. You will see that Vim knows that it is a CUDA source file. It applies the syntax highlighting from the file /usr/share/vim/vim74/syntax/cuda.vim.

So, what is the problem? The syntax highlighting done by Vim for CUDA is very minimal. My CUDA source files look like pages of plain white text in Vim! Also, the comments in the cuda.vim file say that it was last updated in 2007. Now that is old!

Solution 1

There is an alternate cu.vim or cuda.vim syntax file for Vim floating around on the web. You can get it, for example, from here.

I replaced the cuda.vim that ships with Vim with this one and found it slightly better. But not by much. It still looks like lots of plain text.

Solution 2

The better solution for me was to just syntax highlight CUDA files as C++ files. This gave the best results, with far more elements in the file being colored than with the above two methods.

To do this, add this line to your .vimrc:

autocmd BufRead,BufNewFile *.cu set filetype=cpp

Tried with: Vim 7.4 and Ubuntu 14.04