You may sometimes want to change the UID (user ID) and GID (group ID) of your user account. On Linux this takes two steps: change the ID, then change the ownership of all files that still carry the old ID.
For example, to change the UID of joe from 1000 to 9000:
$ sudo usermod -u 9000 joe
$ sudo find / -user 1000 -exec chown -h joe {} \;
The first command changes the UID (and updates ownership of the home directory). The second finds any remaining files owned by the old UID 1000 and changes their owner to joe. For a GID change, the analogous commands are groupmod -g and chgrp.
If you get any errors, remember to log out of your desktop and kill any processes running under your username. It is best to run these commands from a virtual terminal (Ctrl + Alt + F1).
If you want to find out the uptime or load average of a remote machine, you can always SSH to it and run the uptime command. Doing that for a whole bunch of remote machines is tedious, though. Thankfully, two small utilities named rup and rsysinfo make it easy to get such information from many remote computers at once.
Install the rstatd package on each of your remote machines:
$ sudo apt install rstatd
Install the rstat-client package on your local machine:
$ sudo apt install rstat-client
To find the uptime and load averages of a set of remote machines:
$ rup host1 host2 host3
For more detailed system information about a single remote machine:
$ rsysinfo host1
I can print a webpage to PDF, but a lot of formatting is lost in printing. There are times when I need a screenshot of the entire length of a webpage, exactly as it is rendered in the browser. Thankfully, there is no need to install any extension to do this in Firefox.
Enable the screenshot option: Press F12 to open the Developer Tools. Click on the ⚙ (gear) icon on the right. In the Available Toolbox Buttons section, enable the Take a screenshot of the entire page option. A 📷 (camera) icon will appear to the left of the ⚙ (gear) icon. From now on, this 📷 (camera) icon will be available in Developer Tools.
Take a screenshot: Open the webpage you want to take a screenshot of. Press F12 to open Developer Tools and click the 📷 (camera) icon to take a screenshot. It will be saved in your Downloads folder as a PNG image by default.
I am happy to note that this blog CodeYarns.com has passed another milestone today: 5 million views! 😊
The last one million views arrived in the 10 months since September 2016. I have not been writing new blog posts as much as I would like to: there have been only about 60 new posts since the last million milestone. But the monthly visit counts have been slowly but steadily increasing despite this. I will try to keep a more regular writing schedule in what is left of this year. Let us see how long it takes to cross the 6 million mark! 😊
PS: If you have been following my Twitter account, it is now @codeyarns 😈
Virtual functions are a key feature of C++ that enables runtime polymorphism. This post is my attempt at understanding how they are implemented and executed at runtime. The compiler used here is GCC 5.4.0 on Ubuntu 16.04.
Here is a simple program using virtual functions that we will use as an example:
To aid us in understanding what this code is compiled into, we request GCC to add debugging information (using option -g) when we compile it:
$ g++ -g virtual_function_example.cpp
Almost all C++ compilers implement virtual functions using virtual tables, more commonly called vtables. A vtable is a table of function addresses, one for each virtual function in the class. One virtual table is created for each class that has virtual functions.
We can see the existence of the methods and virtual tables of each class and their addresses by examining the symbols in the binary:
$ readelf --symbols a.out | c++filt
Here we use the readelf program to extract the symbols from the binary. The symbols are in mangled form, which is difficult for humans to decipher, so we pipe the output through a demangler.
Here is the output I got on my computer:
We can check which sections of virtual memory the class methods and virtual tables will be loaded into by examining the sections of the binary:
$ readelf --sections a.out
There are 37 section headers, starting at offset 0x6b78:
[Nr] Name Type Address Off Size ES Flg Lk Inf Al
 .text PROGBITS 00000000004007a0 0007a0 0002a2 00 AX 0 0 16
 .rodata PROGBITS 0000000000400a50 000a50 00008b 00 A 0 0 8
Key to Flags:
W (write), A (alloc), X (execute), M (merge), S (strings), l (large)
I (info), L (link order), G (group), T (TLS), E (exclude), x (unknown)
O (extra OS processing required) o (OS specific), p (processor specific)
We can cross-check the addresses of the class methods and virtual tables against the starting addresses and sizes of the sections. We see that the class methods will be loaded into the .text section and the virtual tables into the .rodata section. The flags of these sections indicate that only the .text section is executable, as it should be.
Finally, let us examine how the virtual tables are used at runtime to determine which method to execute. To do this, we disassemble the instructions in the binary:
$ objdump --disassemble --demangle --source a.out
From the output of objdump, only the disassembly of the main function is shown above. In the above command, we have requested objdump to --disassemble the binary code to assembly code, to --demangle the symbol names to human readable form and to annotate the disassembly with the original C++ --source statements.
By examining the disassembled code, the runtime mystery is revealed. Note that every object of a class that has virtual methods stores a pointer to its class virtual table. On a 64-bit computer, this means that objects of such classes need 8 extra bytes. This pointer is placed at the beginning of the memory layout of the object, before all other members.
When you call a virtual method in C++ code, the compiler generates these instructions:
Jump to the beginning of the object. This is a location on the heap or stack, depending on how the object was created. This is where a pointer to its class virtual table is stored.
Jump to the start of the class virtual table. This is a location in the .rodata section of the process virtual memory, as we noted earlier.
Depending on which virtual method is needed, jump to that entry in the virtual table. This entry has the address of that virtual method.
Finally, jump to the address of the virtual method and start executing its instructions. This is in the .text section of the process virtual memory.
I had a Python script that used Caffe2. It worked fine on one computer. On another computer with the same setup, it failed at the import caffe2.python line with this error:
WARNING:root:This caffe2 python run does not have GPU support. Will run in CPU only mode.
WARNING:root:Debug message: dlopen: cannot load any more object with static TLS
CRITICAL:root:Cannot load caffe2.python. Error: dlopen: cannot load any more object with static TLS
As I mentioned above, the GPU support warning is a red herring, because this Caffe2 Python was built with GPU support. The real error is the dlopen failure.
The only clue I found from Googling was this. As suggested there, I moved the import caffe2.python line to the top, above all other imports, and the error disappeared.
Caffe2 is under rapid development, so I find that the master branch may sometimes not compile. It is better to check the available release tags and check out the latest release:
$ git tag
$ git checkout -b v_0_7_0 v0.7.0
The install guide suggests running make to build. Note that this in turn creates a build directory, runs CMake from there and later runs make in a subshell. The child make running from inside a Makefile will not get the MAKEFLAGS of the parent make, so you cannot build in parallel using --jobs or -j. And believe me, building Caffe2 without parallel make takes an extremely long time! So, I prefer running CMake and make myself:
$ mkdir build
$ cd build
$ cmake ..
$ make --jobs $(nproc)
After Caffe2 is built, you need to install the Caffe2 headers, libraries and Python files. If you do not configure anything, CMake will try to install Caffe2 to /usr/local, which requires superuser privileges. I prefer installing Caffe2 to a local directory, say /home/joe/caffe2_deploy. To do this:
$ cmake -DCMAKE_INSTALL_PREFIX:PATH=/home/joe/caffe2_deploy ..
$ make install
My Jabra Move Wireless Bluetooth headset connects to Kubuntu 16.04 without any problem. But when I try to play any video or audio in any player, or even YouTube in a browser, the play button itself does not work! If I disconnect the Bluetooth headset, everything works correctly again.
Looking up the error logs in /var/log/syslog shows this error:
[pulseaudio] bluez5-util.c: Transport TryAcquire() failed for transport /org/bluez/hci0/dev_00_18_09_24_DD_95/fd3 (Operation Not Authorized)
This only happens in the high-fidelity A2DP mode. If I switch to the terrible-sounding lower fidelity mode, everything starts working again. But who would want to listen in low-fidelity mode?
It turns out this is a well-known bug at the intersection of BlueZ (the Bluetooth module) and PulseAudio, as reported here. The only solution seems to be to download this script and run it whenever you see this problem. That is what I did, and my headset is back to working again!
cuDNN is NVIDIA's library of GPU-accelerated primitives for deep learning networks.
To download cuDNN, head over to the cuDNN page here. cuDNN is not directly available for download: NVIDIA requires you to create a login. After that, it presents cuDNN downloads in different formats (.tgz or .deb).
I prefer to install from the .tgz since it gives more control. Extract the archive and it will create a cuda directory containing the required include and lib directories.
I like to rename this directory and keep it at /usr/local:
$ mv cuda cudnn
$ mv cudnn /usr/local
Remember to add the path of the cuDNN libraries to your LD_LIBRARY_PATH. In my case, that is /usr/local/cudnn/lib64.
For CMake in Caffe to automatically find cuDNN while building, export an environment variable named CUDNN_DIR pointing to the cuDNN directory. For me, that is /usr/local/cudnn.
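Both settings can go in your shell startup file (for example ~/.bashrc); the paths below are the ones used above:

```shell
# Let the dynamic linker find the cuDNN shared libraries at runtime.
export LD_LIBRARY_PATH="/usr/local/cudnn/lib64:${LD_LIBRARY_PATH}"

# Let CMake in Caffe find cuDNN at build time.
export CUDNN_DIR="/usr/local/cudnn"
```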
That is it! Caffe should be able to find and link with cuDNN now.