How to install CUDA for NVIDIA GTX 1050 (Notebook)

Installing NVIDIA graphics drivers on Linux has never been easy for me! I recently bought a notebook with an NVIDIA GTX 1050 GPU and installed Kubuntu 16.04. I had to wait more than a month for NVIDIA to release drivers that supported the notebook variant of the 1050.

  • Once the driver was released, I downloaded the .run file directly from NVIDIA’s website here and ran the installer:
$ sudo sh NVIDIA-Linux-x86_64-381.22.run

When I rebooted, I got a black screen! Not surprising with NVIDIA and Linux! I had to uninstall it to get back to work:

$ sudo sh NVIDIA-Linux-x86_64-381.22.run --uninstall
  • After another month, I found that the latest NVIDIA driver supporting the notebook 1050 was available from Ubuntu. So, I tried installing that:
$ sudo apt install nvidia-381

On rebooting, I got a new error message in a GUI dialog box:

The system is running in low-graphics mode
Your screen, graphics card, and input device settings could not be detected correctly.
You will need to configure these yourself.

I had to uninstall it to get back to work:

$ sudo apt purge nvidia-381
  • It finally dawned on me that what I really wanted was to be able to run CUDA programs on the GPU. I did not really care about X or games being able to use the GPU. So, I went back to the .run driver and installed it without OpenGL:
$ sudo sh NVIDIA-Linux-x86_64-381.22.run --no-opengl-files

After rebooting, I found that I still had a desktop. That was a big relief! I proceeded to download and install CUDA:

$ sudo sh cuda_8.0.61_375.26_linux.run

I took care not to install the graphics driver that comes along with the CUDA installer. That was it! I was able to compile and run the CUDA samples. Running ./deviceQuery from the samples showed the GTX 1050, and that is all I wanted! 🙂
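
If you want a check beyond the samples, a trivial kernel launch confirms that the driver and the CUDA toolkit are actually working together. Here is a minimal sketch of such a check; the file name hello.cu is just an example:

// hello.cu: minimal check that a kernel actually runs on the GPU.
// Build: nvcc hello.cu -o hello
#include <cstdio>
#include <cuda_runtime.h>

__global__ void fill(int* out, int val)
{
    // Each thread writes the same value to its own slot.
    out[threadIdx.x] = val;
}

int main()
{
    const int n = 32;
    int* d_out = NULL;
    cudaMalloc((void**)&d_out, n * sizeof(int));

    // Launch one block of n threads on the GPU.
    fill<<<1, n>>>(d_out, 42);

    int h_out[n];
    cudaMemcpy(h_out, d_out, n * sizeof(int), cudaMemcpyDeviceToHost);
    cudaFree(d_out);

    printf("h_out[0] = %d (expected 42)\n", h_out[0]);
    return (h_out[0] == 42) ? 0 : 1;
}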

How to make CUDA and nvidia-smi use same GPU ID

One of the irritating problems I encounter while working with CUDA programs is the GPU ID. This is the identifier used to associate an integer with a GPU on the system. It is just 0 if you have one GPU in the computer. But on a system with multiple GPUs, the GPU ID used by CUDA and the GPU ID used by non-CUDA programs like nvidia-smi can be different! CUDA tries to associate the fastest GPU with the lowest ID, while non-CUDA tools use the PCI bus IDs of the GPUs to assign their GPU IDs.

One workaround I was using was cuda-smi, a tool that shows GPU information using the CUDA GPU IDs.

There is a better solution: request CUDA to use the same PCI bus ID enumeration order as non-CUDA programs. To do this, set the CUDA_DEVICE_ORDER environment variable to PCI_BUS_ID in your shell. The default value of this variable is FASTEST_FIRST. More info on this can be found here. Note that this is available only in CUDA 7 and later.
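
To see the difference between the two orderings for yourself, a small program like the sketch below (not from the original post; the file name pciorder.cu is an example) prints each CUDA device ordinal alongside its PCI bus ID. Run it once normally and once with CUDA_DEVICE_ORDER=PCI_BUS_ID set in the shell, and compare the order with what nvidia-smi shows:

// pciorder.cu: print CUDA device ordinals along with their PCI bus IDs.
// Build: nvcc pciorder.cu -o pciorder
// Run:   ./pciorder  and then  CUDA_DEVICE_ORDER=PCI_BUS_ID ./pciorder
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int i = 0; i < count; ++i)
    {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);

        // PCI bus ID string of this device, e.g. 0000:01:00.0
        char busId[32];
        cudaDeviceGetPCIBusId(busId, (int)sizeof(busId), i);

        printf("CUDA device %d: %s at PCI %s\n", i, prop.name, busId);
    }
    return 0;
}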

Thrust error on min_element or max_element

Problem

I was compiling some old CUDA code with a recent version of the CUDA SDK. I got these errors on Thrust methods:

error: namespace thrust has no member max_element
error: namespace thrust has no member min_element

Solution

In recent versions of the CUDA SDK, these Thrust algorithms have been moved to the thrust/extrema.h header file. The errors go away once you include that header.
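
For example, a small sketch along these lines (not my original code) compiles cleanly once thrust/extrema.h is included:

// extrema.cu: thrust::min_element and thrust::max_element live in thrust/extrema.h.
// Build: nvcc extrema.cu -o extrema
#include <cstdio>
#include <thrust/device_vector.h>
#include <thrust/extrema.h>

int main()
{
    thrust::device_vector<int> d(5);
    d[0] = 3; d[1] = 7; d[2] = 1; d[3] = 9; d[4] = 4;

    // Without thrust/extrema.h, these two calls trigger the
    // "namespace thrust has no member" errors.
    thrust::device_vector<int>::iterator minIt = thrust::min_element(d.begin(), d.end());
    thrust::device_vector<int>::iterator maxIt = thrust::max_element(d.begin(), d.end());

    printf("min = %d, max = %d\n", (int)*minIt, (int)*maxIt);
    return 0;
}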

Tried with: CUDA SDK 7.5 and Ubuntu 14.04

How to use PhysX on Linux

PhysX is a 3D game physics engine provided by NVIDIA. They have released the source code of the engine on GitHub, though with restricted access. The library and its sample programs can be compiled from source easily.

Here are my notes on how to compile this library, run its sample tools and get started:

  • You need a Linux computer (I use Ubuntu) with a fairly modern NVIDIA graphics card.

  • Make sure you have installed recent NVIDIA graphics drivers and a recent version of CUDA on it. Ensure that these are working correctly before trying PhysX.

  • Go to this webpage and jump through their hoops to get access to the PhysX GitHub repository. Essentially, NVIDIA requires you to create a login with them, after which they grant your GitHub account access to the PhysX source code.

  • Once you have access to PhysX source code, clone its repository to your computer:

$ git clone https://github.com/NVIDIAGameWorks/PhysX-3.3.git
  • The documentation in the source code is outdated and misleading. This is how the repository is laid out: Source holds the library source code, while Snippets and Samples hold small programs to try PhysX. Once you have compiled the Snippets and Samples, their binaries are placed in Bin/linux64.

  • Each of the above three code directories has a compiler/linux64 directory which holds the Makefiles to build them. There are four build profiles available: release, profile, checked and debug. Just invoking make builds all four versions. To build just the release versions, I ran make release in all three code directories.

  • Once the library and its snippets and samples are built, you can try the resulting programs from Bin/linux64. For example, the samples program allows you to try many of the features of the engine in an interactive GUI.

Tried with: PhysX 3.3.4, NVIDIA GeForce GTX 750 Ti and Ubuntu 14.04

CMake fails looking for CUDA_TOOLKIT_ROOT_DIR

Problem

You have installed CUDA and try to compile a CUDA program using CMake, which fails with this error:

$ cmake ..
CMake Error at /usr/share/cmake-2.8/Modules/FindCUDA.cmake:548 (message):
  Specify CUDA_TOOLKIT_ROOT_DIR
Call Stack (most recent call first):
  CMakeLists.txt:3 (find_package)

Solution 1

FindCUDA.cmake is trying to find your CUDA installation directory and failing. I had installed CUDA 7.0 on this machine, which was in /usr/local/cuda-7.0. However, CMake looks for /usr/local/cuda. The CUDA installer is supposed to create a symbolic link /usr/local/cuda pointing to the actual installation directory.

That symbolic link was not there on this computer. This can happen when you have two CUDA installations and remove one of them: the one removed takes the symbolic link with it. I had CUDA 6.5 and CUDA 7.0 on this computer before I removed CUDA 6.5.

Anyway, we now know how to fix this:

$ sudo ln -s /usr/local/cuda-7.0 /usr/local/cuda

Solution 2

Pass the CUDA installation directory to the CUDA_TOOLKIT_ROOT_DIR variable directly during the invocation of CMake:

$ cmake -D CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-7.0 ..

Tried with: CUDA 7.0 and Ubuntu 14.04

Syntax highlighting for CUDA in Vim

Problem

Vim has had syntax highlighting support for CUDA source files for a long time now. You can check this: open any .cu file and try the command :set filetype?. You will see that Vim knows that it is a CUDA source file. It applies the syntax highlighting from the file /usr/share/vim/vim74/syntax/cuda.vim.

So, what is the problem? The syntax highlighting done by Vim for CUDA is very minimal. My CUDA source files look like pages of plain white text in Vim! Also, the comments in the cuda.vim file say that it was last updated in 2007. Now that is old!

Solution 1

There is an alternate cu.vim or cuda.vim syntax file for Vim floating around on the web. You can get it, for example, from here.

I replaced the cuda.vim that ships with Vim with this one and found it slightly better, but not by much. It still looks like mostly plain text.

Solution 2

The better solution for me was to simply syntax highlight CUDA files as C++ files. This gave the best results, with more elements in the file being colored than with either of the above two methods.

To do this, add this line to your .vimrc:

autocmd BufRead,BufNewFile *.cu set filetype=cpp

Tried with: Vim 7.4 and Ubuntu 14.04

target_include_directories does not work with CUDA

Problem

target_include_directories is a useful CMake command to specify the include directories for building a particular target, as described here. The FindCUDA module for CMake, which handles CUDA compilation, seems to completely ignore this command. The include directories specified for the target are not passed to nvcc during CUDA compilation.

This will most commonly result in errors of the form: someheader.h: No such file or directory.

Solution

This is a well-known limitation of the CUDA module of CMake, as documented here. There seems to be no plan currently to support target_include_directories for CUDA compilation.

The only solution is to switch to include_directories to add these directories for all the targets in the CMakeLists.txt file.

Tried with: CMake 2.8.12.2, CUDA 6.5 and Ubuntu 14.04

CUDA installation error of unmet dependencies

I was trying to install CUDA and got this error:

$ sudo apt-get install cuda
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 cuda : Depends: cuda-6-5 (= 6.5-14) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.

On searching, I found that a lot of folks had hit this error. Their errors were solved when they removed old NVIDIA packages they had installed. But I did not have any NVIDIA packages installed!

I did not know what these broken packages on my system were! I followed the dependency chain by trying to install cuda-6-5, which complained about another package, and so on. In the end, I found that I had a lot of unnecessary packages, all of which had :i386 at the end of their names.

The strangest part was that running sudo apt-get autoremove had no effect on these packages; they were not removed. So, I manually removed all of them using sudo apt-get remove.

When I tried to install CUDA after this, it worked fine! 🙂

Tried with: Ubuntu 14.04

How to install CUDA 6.5 on Ubuntu 14.04

Installing CUDA is becoming increasingly easier on Ubuntu. I think I keep hitting problems because I am usually updating from an older NVIDIA graphics driver or CUDA version. NVIDIA continues to be quite bad at providing error-free upgrades. Anyway, this is what worked for me:

  • Do not try to install any of the NVIDIA drivers or CUDA packages that are in the Ubuntu repositories. I wasted a day with the errors these operations threw up!

  • Uninstall all CUDA packages and NVIDIA drivers you may have on your Ubuntu system.

  • Download the CUDA .deb package for Ubuntu 14.04 from here. For me, it was a cuda-repo-ubuntu1404_6.5-14_amd64.deb file.

  • The .deb file just adds a CUDA repository maintained by NVIDIA. Install this .deb file and update:

$ sudo gdebi cuda-repo-ubuntu1404_6.5-14_amd64.deb
$ sudo apt-get update
  • Installing CUDA now is as easy as this:
$ sudo apt-get install cuda

This is a big install: it installs everything, including an nvidia-340 driver that actually worked and NVIDIA Nsight. After the install, reboot the computer. Your CUDA setup is ready for work now 🙂

Note: I tried this on two systems. On one, it installed without any problem. On the other, it gave an error of unmet dependencies. I have described here how I solved this problem.

Tried with: NVIDIA GeForce GTS 250 and NVIDIA GTX Titan