CPU feature Caffe2 warning


Running a Caffe2 C++ application produces these messages:

$ ./foobar
E0514 20:37:31.503541 26925] CPU feature avx is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
E0514 20:37:31.504768 26925] CPU feature avx2 is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
E0514 20:37:31.504787 26925] CPU feature fma is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.


Caffe2 is informing us that the Intel CPU in use has the AVX, AVX2 and FMA acceleration features, but the Caffe2 binary was not compiled with support for them. Compiling Caffe2 with these features enabled would speed up training and inference.

To enable use of these features when compiling Caffe2, turn on the USE_NATIVE_ARCH option like this:

$ cmake -DUSE_NATIVE_ARCH=ON ..
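As a quick sanity check before rebuilding, you can inspect the CPU's flags line yourself. This is a minimal sketch (not from the original post): the flags string is a hard-coded sample so it runs anywhere; on a real Linux machine you would read it from /proc/cpuinfo instead.

```shell
# Check a CPU flags line for the features Caffe2 warned about.
# Sample line; in practice use: flags=$(grep -m1 '^flags' /proc/cpuinfo)
flags="fpu vme sse avx avx2 fma"
present=""
for f in avx avx2 fma; do
  case " $flags " in
    *" $f "*) present="$present $f" ;;   # feature advertised by the CPU
  esac
done
echo "present:$present"
```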

past.builtins ImportError


Running a simple Caffe2 Python import gave this error:

$ python -c "from caffe2.python import core"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/caffe2/python/", line 9, in <module>
    from past.builtins import basestring
ImportError: No module named past.builtins


The past.builtins module is provided by the future package. Installing it solved the problem:

$ pip install --user future

Tried with: Raspbian 9
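For reference, here is the failing import sketched with a Python 3 fallback, so it runs even before future is installed. The fallback of mapping basestring to str mirrors what past.builtins provides on Python 3; this is an illustrative sketch, not code from Caffe2.

```shell
# Try the import that Caffe2 needs; fall back to the Python 3 equivalent.
result=$(python3 -c '
try:
    from past.builtins import basestring  # needs: pip install future
except ImportError:
    basestring = str  # what past.builtins maps it to on Python 3
print(isinstance("hello", basestring))
')
echo "$result"
```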

serialized_options TypeError


I built and installed Caffe2. Running a simple Caffe2 Python import gave this error:

$ python -c "from caffe2.python import core"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/caffe2/python/", line 2, in <module>
    from caffe2.proto import caffe2_pb2
  File "/usr/local/lib/python2.7/dist-packages/caffe2/proto/", line 11, in <module>
    from caffe2.proto import caffe2_pb2, metanet_pb2, torch_pb2
  File "/usr/local/lib/python2.7/dist-packages/caffe2/proto/", line 23, in <module>
TypeError: __new__() got an unexpected keyword argument 'serialized_options'


This is caused by an older version of the protobuf Python package. Updating it solved the problem:

$ pip install -U protobuf

Tried with: Raspbian 9
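The failure boils down to a version mismatch: the generated *_pb2 files pass serialized_options, which, as far as I know, needs protobuf 3.6.0 or newer. Here is a hedged shell sketch of that comparison; the installed version is hard-coded for illustration, and in practice you would read it with `pip show protobuf | awk '/^Version/{print $2}'`.

```shell
# Compare an installed protobuf version against the assumed minimum (3.6.0)
# using sort -V for a proper version-aware comparison.
installed="3.1.0"
minimum="3.6.0"
oldest=$(printf '%s\n%s\n' "$minimum" "$installed" | sort -V | head -n 1)
if [ "$oldest" = "$minimum" ]; then
  verdict="ok"
else
  verdict="too old, run: pip install -U protobuf"
fi
echo "protobuf $installed: $verdict"
```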

Out of memory error on Raspbian


I was compiling some templated C++ on a Raspberry Pi when the compilation failed with this error:

cc1plus: out of memory allocating 66574076 bytes after a total of 98316160 bytes


The Raspberry Pi had 1GB of RAM. From the error it looked like the compiler needed more memory than that.

It turns out that Raspbian does not use a swap partition. Instead, it uses a /var/swap file on the SD card, managed by the dphys-swapfile service. The default size of this swapfile is 100MB. As the error above shows, this swap space was not enough.

Since I had more than 1GB of free space still available on my SD card, I decided to increase the swapfile size. To do that, open the file /etc/dphys-swapfile as root and increase the number of MB in the line CONF_SWAPSIZE=100.

Restart the swapfile service using this command:

$ sudo /etc/init.d/dphys-swapfile restart

Alternatively, you could restart Raspbian.

Tried with: Raspbian 9 and Raspberry Pi B+ Rev3
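The edit described above can be sketched as a one-line sed. Here it runs against a temporary copy of the config so the sketch is self-contained; on a real Pi you would apply the same sed to /etc/dphys-swapfile with sudo, then restart the service.

```shell
# Bump CONF_SWAPSIZE in a throwaway copy of the dphys-swapfile config.
cfg=$(mktemp)
printf 'CONF_SWAPSIZE=100\n' > "$cfg"            # the Raspbian default
sed -i 's/^CONF_SWAPSIZE=.*/CONF_SWAPSIZE=1024/' "$cfg"
new_size=$(cat "$cfg")
echo "$new_size"
rm -f "$cfg"
```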


Netron

Visualizing deep learning models has become a difficult task with the explosion of deep learning frameworks and the formats used to store models. Every framework ships with its own visualization tool, so people working with multiple frameworks must learn and use a different tool for the same task.

Netron aims to solve this problem by being a model visualization tool that supports pretty much every DL framework and model format. All the important ones that I care about are supported: Caffe prototxt, TensorFlow protobuf and ONNX.

  • Netron can be installed as a Python package from PyPI:
$ pip3 install --user netron
  • Using it is straightforward:
$ netron -b foobar.pb
$ netron -b foobar.prototxt
$ netron -b foobar.onnx

You can view and interact with a visualization of the graph in your browser at localhost:8080.

How to fetch Github pull request as local branch

Github pull requests are the mechanism by which contributors submit code for review and subsequent merging into a Github branch. It can sometimes be handy to grab a pull request as a branch in your local repository, for example to diff it against or merge it with one of your local branches.

To fetch a Github pull request, note down its number XYZ. Use this command to fetch it to a new local branch:

$ git fetch origin pull/XYZ/head:new_local_branchname

The pull request is now available locally as the branch new_local_branchname.
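A related trick, which is my own addition rather than part of the post above: add a fetch refspec so that every pull request is fetched automatically as refs/remotes/origin/pr/<number> on the next git fetch. The repository URL below is hypothetical, and a throwaway repo keeps the demo self-contained.

```shell
# Configure a remote so all GitHub PRs are fetched as origin/pr/<number>.
tmp=$(mktemp -d)
git init -q "$tmp"
git -C "$tmp" remote add origin https://github.com/user/repo.git
git -C "$tmp" config --add remote.origin.fetch \
    '+refs/pull/*/head:refs/remotes/origin/pr/*'
refspecs=$(git -C "$tmp" config --get-all remote.origin.fetch)
echo "$refspecs"
rm -rf "$tmp"
```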


How to visualize TensorFlow protobuf in Tensorboard

Tensorboard is a browser based visualization tool for the training of deep learning models using TensorFlow. It is typically invoked on the log directory that is output from the TensorFlow training process. It is not straightforward to use it to visualize a model stored as a single protobuf (.pb) file.

Here is how to do that:

  • Install TensorFlow and Tensorboard, if you do not have them already:
$ pip3 install -U --user tensorflow tensorboard
  • Convert the protobuf file to a file that Tensorboard can work with using an import script that ships with TensorFlow:
$ python3 ~/.local/lib/python3.6/site-packages/tensorflow/python/tools/ --model_dir foobar_model.pb --log_dir foobar_log_dir

This script creates the log directory you requested if it does not already exist. Inside it, it creates a file with a name of the form events.out.tfevents.1557253678.your-hostname that Tensorboard understands.
Note that it is better to pass a different log directory for every model.

Another thing to note is that although the option is named --model_dir, it actually expects a protobuf file as input.

  • Now we can invoke Tensorboard with the log directory as input:
$ tensorboard --logdir=foobar_log_dir

The tensorboard executable file should be present in your ~/.local/bin directory. If this path is not in your PATH environment variable, consider adding it. Alternatively, you can invoke the executable with its absolute path too.

  • You can visualize and explore the structure of your model in Tensorboard by opening localhost:6006 in your browser.
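The note above about having ~/.local/bin on your PATH can be checked with a small shell sketch:

```shell
# Prepend ~/.local/bin and verify it is now on the search path for this
# shell session; add the same export to ~/.bashrc to make it permanent.
PATH="$HOME/.local/bin:$PATH"
case ":$PATH:" in
  *":$HOME/.local/bin:"*) path_ok="yes" ;;
  *)                      path_ok="no"  ;;
esac
echo "~/.local/bin on PATH: $path_ok"
```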