past.builtins ImportError

Problem

Importing a Caffe2 Python module gave this error:

$ python -c "from caffe2.python import core"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/caffe2/python/core.py", line 9, in <module>
    from past.builtins import basestring
ImportError: No module named past.builtins

Solution

This error means that the future package, which provides the past.builtins module, is missing. Installing it solved the problem:

$ pip install --user future
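If installing the package is not an option, the compatibility name can be shimmed by hand. This is a minimal sketch of what past.builtins.basestring provides, not Caffe2 code:

```python
# Minimal stand-in for past.builtins.basestring (sketch; assumes you only
# need isinstance() checks against string types on Python 2 or 3)
import sys

if sys.version_info[0] >= 3:
    basestring = str  # on Python 3, all strings are str

print(isinstance("hello", basestring))  # True on both Python 2 and 3
```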

Tried with: Raspbian 9

serialized_options TypeError

Problem

I built and installed Caffe2. Running a simple Caffe2 Python import gave this error:

$ python -c "from caffe2.python import core"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/caffe2/python/__init__.py", line 2, in <module>
    from caffe2.proto import caffe2_pb2
  File "/usr/local/lib/python2.7/dist-packages/caffe2/proto/__init__.py", line 11, in <module>
    from caffe2.proto import caffe2_pb2, metanet_pb2, torch_pb2
  File "/usr/local/lib/python2.7/dist-packages/caffe2/proto/caffe2_pb2.py", line 23, in <module>
    \x44\x65viceTypeProto\x12\r\n\tPROTO_CPU\x10\x00\x12\x0e\n\nPROTO_CUDA\x10\x01\x12\x10\n\x0cPROTO_MKLDNN\x10\x02\x12\x10\n\x0cPROTO_OPENGL\x10\x03\x12\x10\n\x0cPROTO_OPENCL\x10\x04\x12\x0f\n\x0bPROTO_IDEEP\x10\x05\x12\r\n\tPROTO_HIP\x10\x06\x12\x0e\n\nPROTO_FPGA\x10\x07\x12\x0f\n\x0bPROTO_MSNPU\x10\x08\x12\r\n\tPROTO_XLA\x10\t\x12\'\n#PROTO_COMPILE_TIME_MAX_DEVICE_TYPES\x10\n\x12\x19\n\x13PROTO_ONLY_FOR_TEST\x10\xa5\xa3\x01')
TypeError: __new__() got an unexpected keyword argument 'serialized_options'

Solution

This is caused by an older version of the protobuf package. Upgrading it solved the problem:

$ pip install -U protobuf
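To confirm which protobuf version is installed before and after the upgrade, you can query the package metadata. This sketch uses Python 3.8's importlib.metadata; on Python 2, pip show protobuf gives the same information:

```python
# Print the installed protobuf package version (Python 3.8+)
from importlib.metadata import version, PackageNotFoundError

try:
    print(version("protobuf"))
except PackageNotFoundError:
    print("protobuf is not installed")
```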

Tried with: Raspbian 9

Out of memory error on Raspbian

Problem

I was compiling some templated C++ on a Raspberry Pi when the compilation failed with this error:

cc1plus: out of memory allocating 66574076 bytes after a total of 98316160 bytes

Solution

The Raspberry Pi had 1GB of RAM. From the error it looked like the compiler needed more memory than that.

It turns out that Raspbian does not use a swap partition. Instead, it uses a /var/swap file on the SD card, managed by the dphys-swapfile service. The default size of this swapfile is 100MB. As can be seen from the error above, this swap space was not enough.

Since I had more than 1GB of free space still available on my SD card, I decided to increase this swapfile. To do that, open the file /etc/dphys-swapfile and increase the number of MB set in the line CONF_SWAPSIZE=100.

Restart the swapfile service using this command:

$ sudo /etc/init.d/dphys-swapfile restart

Alternatively, you could restart Raspbian.
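The edit and restart can also be scripted. This is a sketch using sed; 512 MB is an example value, pick a size that fits your free space:

```shell
# Raise the swapfile size to 512 MB and restart the service (run with sudo).
sudo sed -i 's/^CONF_SWAPSIZE=.*/CONF_SWAPSIZE=512/' /etc/dphys-swapfile
sudo /etc/init.d/dphys-swapfile restart
```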

Tried with: Raspbian 9 and Raspberry Pi B+ Rev3

VcXsrv X server for Windows

An X server is needed on Windows if you SSH to remote Linux computers and wish to start X or GUI applications from there. It is also necessary if you are working with a Linux distribution running inside the Windows Subsystem for Linux (WSL). VcXsrv is a free open-source X server that can be used for all these purposes.

  • To install VcXsrv, download its installer from here and install it.
  • Launch XLaunch from the Windows start menu. This brings up a wizard to pick the X server configuration options. I choose Multiple Windows and go with the default options for the rest. Now the X server is running in the background.

  • Local: Go to your WSL shell (say Ubuntu) and set the DISPLAY environment variable:

$ export DISPLAY=localhost:0.0

Launch any X or GUI app and its window should now be displayed in its own individual Windows window.

  • Remote: SSH to the remote system with trusted X11 forwarding using the -Y option; SSH then sets the DISPLAY variable on the remote shell automatically. Alternatively, set DISPLAY manually to point directly at your Windows machine:
$ export DISPLAY=your-windows-ip:0.0

When you start XLaunch you may want to choose to Disable access control. Otherwise you may get errors like this:

$ xeyes
Authorization required, but no authorization protocol specified
Error: Can't open display: 10.0.0.99:0.0

Tried with: VcXsrv 1.20.1.4 and Ubuntu 18.04 WSL

Netron

Visualizing deep learning models has become a difficult task with the explosion of Deep Learning frameworks and formats used to store models. Every framework ships with its own visualization tool. For people working with multiple frameworks, this means learning and using different tools for the same task.

Netron aims to solve this problem by being a model visualization tool that supports pretty much every DL framework and model format. All the important ones that I care about are supported: Caffe prototxt, TensorFlow protobuf and ONNX.

  • Netron can be installed as a Python package from PyPI:
$ pip3 install --user netron
  • Using it is straightforward:
$ netron -b foobar.pb
$ netron -b foobar.prototxt
$ netron -b foobar.onnx

You can view and interact with a visualization of the graph in your browser at localhost:8080.

How to fetch Github pull request as local branch

Github pull requests are the mechanism by which contributors submit code for review and subsequent merging into a Github branch. It can sometimes be useful to grab a pull request as a branch in your local repository, for example to diff or merge it with one of your local branches.

To fetch a Github pull request, note down its number XYZ. Use this command to fetch it to a new local branch:

$ git fetch origin pull/XYZ/head:new_local_branchname

The pull request is now available locally as the branch new_local_branchname.
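The same pull/XYZ/head refspec can be exercised entirely locally. This sketch builds a throwaway "origin" repository with a simulated pull-request ref and fetches it as a branch; the paths and the PR number 1 are made up for the demo:

```shell
# Create an "origin" with a simulated pull request ref, then fetch it
set -e
tmp=$(mktemp -d)
git -c init.defaultBranch=main init -q "$tmp/origin"
git -C "$tmp/origin" -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "PR commit"
# GitHub exposes pull requests under refs/pull/<number>/head
git -C "$tmp/origin" update-ref refs/pull/1/head HEAD

git -c init.defaultBranch=main init -q "$tmp/clone"
git -C "$tmp/clone" remote add origin "$tmp/origin"
git -C "$tmp/clone" fetch -q origin pull/1/head:pr-1
git -C "$tmp/clone" branch   # lists the new pr-1 branch
rm -rf "$tmp"
```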

Reference: https://help.github.com/en/articles/checking-out-pull-requests-locally

How to visualize TensorFlow protobuf in Tensorboard

Tensorboard is a browser-based visualization tool for the training of deep learning models using TensorFlow. It is typically invoked on the log directory output by the TensorFlow training process. It is not straightforward to use it to visualize a model stored as a single protobuf (.pb) file.

Here is how to do that:

  • Install TensorFlow and Tensorboard, if you do not have them already:
$ pip3 install -U --user tensorflow tensorboard
  • Convert the protobuf file to a file that Tensorboard can work with using an import script that ships with TensorFlow:
$ python3 ~/.local/lib/python3.6/site-packages/tensorflow/python/tools/import_pb_to_tensorboard.py --model_dir foobar_model.pb --log_dir foobar_log_dir

This script creates the log directory you requested if it does not exist, and writes into it a file with a name of the form events.out.tfevents.1557253678.your-hostname that Tensorboard understands.
Note that it is better to pass a different log directory for every model.

Another thing to note is that the option is named --model_dir but it actually expects a protobuf file as input.

  • Now we can invoke Tensorboard with the log directory as input:
$ tensorboard --logdir=foobar_log_dir

The tensorboard executable should be present in your ~/.local/bin directory. If this path is not in your PATH environment variable, consider adding it. Alternatively, you can invoke the executable with its absolute path.
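Adding the directory to PATH can be done with a line like this in your ~/.bashrc. This is a sketch; the case guard just avoids prepending the directory twice:

```shell
# Prepend ~/.local/bin to PATH only if it is not already there
case ":$PATH:" in
  *":$HOME/.local/bin:"*) ;;                    # already present, do nothing
  *) export PATH="$HOME/.local/bin:$PATH" ;;    # prepend it
esac
```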

  • You can visualize and explore the structure of your model in Tensorboard by opening localhost:6006 in your browser.

nvprof in nvidia-docker permissions warning

Problem

I was running a CUDA application inside an nvidia-docker container. When I tried to profile it with nvprof, I got a permissions warning and no profile information was generated:

==616== Warning: The user does not have permission to profile on the target device. See the following link for instructions to enable permissions and get more information: https://developer.nvidia.com/NVSOLN1000
==616== Warning: Some profiling data are not recorded. Make sure cudaProfilerStop() or cuProfilerStop() is called before application exit to flush profile data.

For another application, the error looked like this:

==643== NVPROF is profiling process 643, command: foobar
==643== Warning: The user does not have permission to profile on the target device. See the following link for instructions to enable permissions and get more information: https://developer.nvidia.com/NVSOLN1000
==643== Profiling application: foobar
==643== Profiling result:                                                                                                     
No kernels were profiled.                                                         
No API activities were profiled.

Solution

The warning message has a link, but the documentation there is not relevant to this Docker problem. The solution turned out to be adding the --privileged option to my nvidia-docker command invocation.
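For instance, an invocation along these lines worked; the image name my-cuda-image and the application ./foobar are placeholders:

```shell
$ nvidia-docker run --privileged -it my-cuda-image nvprof ./foobar
```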

How to install Raspbian 9

Raspbian 9 (Stretch) is the latest release of Raspbian, the Debian-based OS for the Raspberry Pi.

Here is how I installed it:

  • Download the Raspbian Stretch Lite installation file from here.

  • We need a tool to write the OS image to an SD card. I used Etcher, which can be installed from here.

  • Insert an SD card of at least 4GB capacity into your computer. Use Etcher to write the downloaded zip file to the SD card.

  • Eject the SD card. Remove it and plug it back into your computer. Create an empty file named ssh in the root of the SD card's boot partition. This enables you to SSH to your Raspberry Pi.

  • Insert this SD card into the Raspberry Pi board. Connect your Pi to your home wireless router with an Ethernet cable. You can also connect your Pi to your TV or computer display with an HDMI cable. Power on the Pi.

  • You can see Raspbian booting up on your TV or display. At the end, it displays the IP address assigned to it by DHCP. You can also find the IP address from the admin console of your wireless router. Let us say the IP address is 192.168.0.10.

  • SSH to the IP address of your Pi. The login is pi and the password is raspberry.

$ ssh pi@192.168.0.10
  • You are logged into the Pi now! Change the password using the passwd command.

  • Update the packages using these commands:

$ sudo apt update
$ sudo apt upgrade

Your Raspbian 9 is now all set for use.