How to install VeraCrypt on Raspbian

  • Install libraries required by VeraCrypt:
$ sudo apt install libfuse-dev libwxbase3.0-dev
  • Download the latest version of VeraCrypt that has a Raspbian package. The latest version I could find for Raspbian was v1.21. It can be downloaded using this command:
$ wget -O veracrypt-1.21-raspbian-setup.tar.bz2 https://sourceforge.net/projects/veracrypt/files/VeraCrypt%201.21/veracrypt-1.21-raspbian-setup.tar.bz2/download
  • Uncompress the file:
$ tar xvf veracrypt-1.21-raspbian-setup.tar.bz2
  • Make the installer executable and run it:
$ chmod +x veracrypt-1.21-setup-console-armv7
$ sudo ./veracrypt-1.21-setup-console-armv7
  • That is it! Try it out:
$ veracrypt --version
VeraCrypt 1.21
  • In case you need to uninstall it in the future, this is how to do it:
$ sudo /usr/bin/veracrypt-uninstall.sh
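
If you want to take it further, here is a minimal sketch of creating and mounting an encrypted container from the console; secret.vc and /mnt/secret are hypothetical names:

# Create a container interactively (VeraCrypt prompts for size, password and filesystem)
$ veracrypt --text --create secret.vc
# Mount it (you will be prompted for the password)
$ sudo mkdir -p /mnt/secret
$ veracrypt --text secret.vc /mnt/secret
# Dismount when done
$ veracrypt --dismount secret.vc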

Tried with: Raspbian 9

CPU feature Caffe2 warning

Problem

Running a Caffe2 C++ application produces these messages:

$ ./foobar
E0514 20:37:31.503541 26925 init_intrinsics_check.cc:43] CPU feature avx is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
E0514 20:37:31.504768 26925 init_intrinsics_check.cc:43] CPU feature avx2 is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
E0514 20:37:31.504787 26925 init_intrinsics_check.cc:43] CPU feature fma is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.

Solution

Caffe2 is informing you that the Intel CPU it is running on supports the AVX, AVX2 and FMA acceleration features, but the Caffe2 binary was not compiled with support for them. Compiling Caffe2 with these features enabled would speed up training and inference.

To enable use of these features when compiling Caffe2, enable the USE_NATIVE_ARCH option like this:

$ cmake -DUSE_NATIVE_ARCH=ON ..
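
For context, here is a minimal sketch of a full out-of-source build with that option enabled, assuming you are at the root of the Caffe2 source tree:

# Configure in a separate build directory with native CPU features enabled
$ mkdir -p build && cd build
$ cmake -DUSE_NATIVE_ARCH=ON ..
# Build using all available cores
$ make -j"$(nproc)"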

How to install Intel MKL

Intel Math Kernel Library (MKL) provides math routines optimized for Intel CPUs. It is available free for personal and community use.

  • Register and download MKL from Intel's website. I prefer to choose the full package.
  • For Linux, the downloaded file is a gzipped tar file, for example l_mkl_2019.3.199.tgz. Extract its contents.
  • Run its installer script:
$ sudo ./install.sh

The installer takes you through a console install wizard. By default, it installs to /opt/intel and installs the MKL libraries for both C/C++ and Fortran. I typically choose not to install the Fortran libraries, which halves the install size.
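
Once installed, the usual workflow is to source the environment script that ships with MKL and then link against it. A minimal sketch, assuming the default /opt/intel location; my_blas_app.c is a hypothetical source file:

# Set up the compiler and linker environment for 64-bit MKL
$ source /opt/intel/mkl/bin/mklvars.sh intel64
# Link against the single dynamic MKL runtime library
$ gcc my_blas_app.c -lmkl_rt -o my_blas_app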

Tried with: Ubuntu 18.04

past.builtins ImportError

Problem

Importing a Caffe2 Python module gave this error:

$ python -c "from caffe2.python import core"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/caffe2/python/core.py", line 9, in <module>
    from past.builtins import basestring
ImportError: No module named past.builtins

Solution

Caffe2 requires the future package. Installing it solved the problem:

$ pip install --user future
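
You can confirm that the module is now importable:

$ python -c "from past.builtins import basestring"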

Tried with: Raspbian 9

serialized_options TypeError

Problem

I built and installed Caffe2. Running a simple Caffe2 Python import gave this error:

$ python -c "from caffe2.python import core"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/caffe2/python/__init__.py", line 2, in <module>
    from caffe2.proto import caffe2_pb2
  File "/usr/local/lib/python2.7/dist-packages/caffe2/proto/__init__.py", line 11, in <module>
    from caffe2.proto import caffe2_pb2, metanet_pb2, torch_pb2
  File "/usr/local/lib/python2.7/dist-packages/caffe2/proto/caffe2_pb2.py", line 23, in <module>
    \x44\x65viceTypeProto\x12\r\n\tPROTO_CPU\x10\x00\x12\x0e\n\nPROTO_CUDA\x10\x01\x12\x10\n\x0cPROTO_MKLDNN\x10\x02\x12\x10\n\x0cPROTO_OPENGL\x10\x03\x12\x10\n\x0cPROTO_OPENCL\x10\x04\x12\x0f\n\x0bPROTO_IDEEP\x10\x05\x12\r\n\tPROTO_HIP\x10\x06\x12\x0e\n\nPROTO_FPGA\x10\x07\x12\x0f\n\x0bPROTO_MSNPU\x10\x08\x12\r\n\tPROTO_XLA\x10\t\x12\'\n#PROTO_COMPILE_TIME_MAX_DEVICE_TYPES\x10\n\x12\x19\n\x13PROTO_ONLY_FOR_TEST\x10\xa5\xa3\x01')
TypeError: __new__() got an unexpected keyword argument 'serialized_options'

Solution

This is caused by an older version of the protobuf package. Updating it solved the problem:

$ pip install -U protobuf
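
As far as I can tell, the serialized_options keyword argument is only supported by protobuf 3.6 and newer. You can check which version is installed like this:

$ python -c "import google.protobuf; print(google.protobuf.__version__)"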

Tried with: Raspbian 9

Out of memory error on Raspbian

Problem

I was compiling some templated C++ on a Raspberry Pi when the compilation failed with this error:

cc1plus: out of memory allocating 66574076 bytes after a total of 98316160 bytes

Solution

The Raspberry Pi had 1GB of RAM. From the error it looked like the compiler needed more memory than that.

It turns out that Raspbian does not use a swap partition. Instead, it uses a /var/swap file on the SD card, managed by the dphys-swapfile service. The default size of this swapfile is 100MB. As can be seen from the error above, this swap space was not enough.

Since I still had more than 1GB of free space on my SD card, I decided to increase the size of this swapfile. To do that, open the file /etc/dphys-swapfile and increase the number of megabytes set in the line CONF_SWAPSIZE=100.
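
For example, this one-liner grows the swap to 1024MB, assuming the default CONF_SWAPSIZE=100 line is still present in the file:

# Replace the default 100MB swap size with 1024MB
$ sudo sed -i 's/^CONF_SWAPSIZE=100$/CONF_SWAPSIZE=1024/' /etc/dphys-swapfile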

Restart the swapfile using the command:

$ sudo /etc/init.d/dphys-swapfile restart

Alternatively, you could restart Raspbian.
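
Either way, you can verify the new swap size afterwards:

$ free -m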

Tried with: Raspbian 9 and Raspberry Pi B+ Rev3

VcXsrv X server for Windows

An X server is needed on Windows if you SSH to remote Linux computers and wish to start X or GUI applications from there. It is also necessary if you are working with a Linux distribution running inside the Windows Subsystem for Linux (WSL). VcXsrv is a free open-source X server that can be used for all these purposes.

  • To install VcXsrv, download its installer from its SourceForge page and install it.
  • Launch XLaunch from the Windows start menu. This brings up a wizard to pick the X server configuration options. I choose Multiple Windows and go with the default options for the rest. Now the X server is running in the background.

  • Local: Go to your WSL shell (say Ubuntu) and set the DISPLAY environment variable:

$ export DISPLAY=localhost:0.0

Launch any X or GUI app and its window should now be displayed in its own individual Windows window.

  • Remote: Remember to SSH to the remote system with trusted X11 forwarding using option -Y. On the remote system, set the DISPLAY variable:
$ export DISPLAY=your-windows-ip:0.0
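
For example, assuming a hypothetical remote host foo-server, user joe and a Windows machine at 10.0.0.5:

# SSH in with trusted X11 forwarding
$ ssh -Y joe@foo-server
# On the remote system, point X applications at the VcXsrv server on Windows
$ export DISPLAY=10.0.0.5:0.0
$ xeyes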

When you start XLaunch you may want to choose to Disable access control. Otherwise you may get errors like this:

$ xeyes
Authorization required, but no authorization protocol specified
Error: Can't open display: 10.0.0.99:0.0

Tried with: VcXsrv 1.20.1.4 and Ubuntu 18.04 WSL

Netron

Visualizing deep learning models has become a difficult task with the explosion of deep learning frameworks and formats used to store models. Every framework ships with its own visualization tool. For people working with multiple frameworks, this means learning and using different tools for the same task.

Netron aims to solve this problem by being a model visualization tool that supports pretty much every DL framework and model format. All the important ones that I care about are supported: Caffe prototxt, TensorFlow protobuf and ONNX.

  • Netron can be installed as a Python package from PyPI:
$ pip3 install --user netron
  • Using it is straightforward:
$ netron -b foobar.pb
$ netron -b foobar.prototxt
$ netron -b foobar.onnx

You can view and interact with a visualization of the graph in your browser at localhost:8080.
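
If something else is already listening on port 8080, Netron can be asked to serve elsewhere; this sketch assumes your Netron version supports the --port option:

$ netron -b --port 8081 foobar.onnx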

How to fetch a GitHub pull request as a local branch

GitHub pull requests are the mechanism by which contributors submit code for review and subsequent merging into a branch. It can sometimes be useful to grab a pull request as a branch in your local repository, for example to diff it against or merge it into one of your local branches.

To fetch a GitHub pull request, note down its number XYZ. Use this command to fetch it into a new local branch:

$ git fetch origin pull/XYZ/head:new_local_branchname

The pull request is now available locally as the branch new_local_branchname.
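
For example, to fetch a hypothetical pull request number 42 into a branch named pr-42 and compare it against your master branch:

$ git fetch origin pull/42/head:pr-42
$ git diff master...pr-42
$ git checkout pr-42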

Reference: https://help.github.com/en/articles/checking-out-pull-requests-locally

How to visualize TensorFlow protobuf in TensorBoard

TensorBoard is a browser-based visualization tool for the training of deep learning models using TensorFlow. It is typically invoked on the log directory written out by the TensorFlow training process. It is not straightforward to use it to visualize a model stored as a single protobuf (.pb) file.

Here is how to do that:

  • Install TensorFlow and TensorBoard if you do not have them already:
$ pip3 install -U --user tensorflow tensorboard
  • Convert the protobuf file to a file that TensorBoard can work with, using an import script that ships with TensorFlow:
$ python3 ~/.local/lib/python3.6/site-packages/tensorflow/python/tools/import_pb_to_tensorboard.py --model_dir foobar_model.pb --log_dir foobar_log_dir

This script creates the log directory you requested if it does not exist. Inside it, it writes a file with a name of the form events.out.tfevents.1557253678.your-hostname that TensorBoard understands. Note that it is better to pass a different log directory for every model.

Another thing to note is that the option is named --model_dir but it actually expects a protobuf file as input.

  • Now we can invoke TensorBoard with the log directory as input:
$ tensorboard --logdir=foobar_log_dir

The tensorboard executable should be present in your ~/.local/bin directory. If this path is not in your PATH environment variable, consider adding it. Alternatively, you can invoke the executable with its absolute path.
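
If you would rather not modify PATH, TensorBoard can also be launched as a Python module; this assumes the TensorFlow 1.x-era packaging used in this post:

$ python3 -m tensorboard.main --logdir=foobar_log_dir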

  • You can visualize and explore the structure of your model in TensorBoard by opening localhost:6006 in your browser.