serialized_options TypeError

Problem

I built and installed Caffe2. Running a simple Caffe2 Python import gave this error:

$ python -c "from caffe2.python import core"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/caffe2/python/__init__.py", line 2, in <module>
    from caffe2.proto import caffe2_pb2
  File "/usr/local/lib/python2.7/dist-packages/caffe2/proto/__init__.py", line 11, in <module>
    from caffe2.proto import caffe2_pb2, metanet_pb2, torch_pb2
  File "/usr/local/lib/python2.7/dist-packages/caffe2/proto/caffe2_pb2.py", line 23, in <module>
\x44\x65viceTypeProto\x12\r\n\tPROTO_CPU\x10\x00\x12\x0e\n\nPROTO_CUDA\x10\x01\x12\x10\n\x0cPROTO_MKLDNN\x10\x02\x12\x10\n\x0cPROTO_OPENGL\x10\x03\x12\x10\n\x0cPROTO_OPENCL\x10\x04\x12\x0f\n\x0bPROTO_IDEEP\x10\x05\x12\r\n\tPROTO_HIP\x10\x06\x12\x0e\n\nPROTO_FPGA\x10\x07\x12\x0f\n\x0bPROTO_MSNPU\x10\x08\x12\r\n\tPROTO_XLA\x10\t\x12\'\n#PROTO_COMPILE_TIME_MAX_DEVICE_TYPES\x10\n\x12\x19\n\x13PROTO_ONLY_FOR_TEST\x10\xa5\xa3\x01')
TypeError: __new__() got an unexpected keyword argument 'serialized_options'

Solution

This is caused by an older version of the Protobuf Python package, which does not understand the serialized_options argument used by the generated caffe2_pb2.py file. Updating Protobuf solved the problem:

$ pip install -U protobuf
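
To confirm what is installed, you can print the Protobuf runtime version from Python (this check is mine, not from the original post); generated *_pb2.py files that pass serialized_options need a runtime new enough to accept that keyword, which to my knowledge means Protobuf 3.6 or later:

# Print the installed Protobuf Python runtime version.
# Generated *_pb2.py files that pass serialized_options need a runtime
# new enough to accept that keyword (around 3.6+, to my knowledge).
import google.protobuf
print(google.protobuf.__version__)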

Tried with: Raspbian 9

How to visualize TensorFlow protobuf in Tensorboard

Tensorboard is a browser-based tool for visualizing the training of deep learning models built with TensorFlow. It is typically invoked on the log directory written out by the TensorFlow training process. Using it to visualize a model stored as a single protobuf (.pb) file is not straightforward.

Here is how to do that:

  • Install TensorFlow and Tensorboard, if you do not have them already:
$ pip3 install -U --user tensorflow tensorboard
  • Convert the protobuf file to a file that Tensorboard can work with using an import script that ships with TensorFlow:
$ python3 ~/.local/lib/python3.6/site-packages/tensorflow/python/tools/import_pb_to_tensorboard.py --model_dir foobar_model.pb --log_dir foobar_log_dir

This script creates the log directory you requested if it does not already exist. Inside it, the script writes a file with a name of the form events.out.tfevents.1557253678.your-hostname that Tensorboard understands. Note that it is better to pass a different log directory for every model.

Another thing to note is that the option is named --model_dir, but it actually expects a protobuf file as input. (A Python sketch of what this import script does is shown after the steps below.)

  • Now we can invoke Tensorboard with the log directory as input:
$ tensorboard --logdir=foobar_log_dir

The tensorboard executable should be present in your ~/.local/bin directory. If this path is not in your PATH environment variable, consider adding it. Alternatively, you can invoke the executable using its absolute path.

  • You can visualize and explore the structure of your model in Tensorboard by opening localhost:6006 in your browser.
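
For reference, here is a minimal Python sketch of what the import script does, assuming the TensorFlow 1.x API that the paths above suggest, and reusing the example names foobar_model.pb and foobar_log_dir:

# Minimal sketch (TensorFlow 1.x API assumed): load a frozen GraphDef from
# a .pb file and write the graph to a log directory Tensorboard can read.
import tensorflow as tf

# Read the serialized GraphDef from the protobuf file.
with tf.gfile.GFile("foobar_model.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Import the graph and dump it as an events file for Tensorboard.
with tf.Session() as sess:
    tf.import_graph_def(graph_def, name="")
    writer = tf.summary.FileWriter("foobar_log_dir", sess.graph)
    writer.close()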

The strange case of varying floats in Protobuf

Problem

I was using Google Protobuf in a Python program to read some text format Protobuf messages, merge them, and write them out. Surprisingly, for the same set of input text format message files, I was getting different outputs on two computers! The differences were all in float values: the values were generally correct, but their printed precision varied slightly between the two computers.

Solution

This strange observation took quite a long investigation. I initially assumed that maybe the Protobuf library (libprotobuf.so) or the Python Protobuf package were of different versions on these two computers. Surprisingly, they were exactly the same.

The mystery finally turned out to be the Protobuf implementation type. There are currently two possible implementations: cpp and python. By default, the cpp implementation is used. However, on one of the computers, the python implementation had been chosen by an engineer when installing the pip package. The implementation is picked by setting an environment variable named PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION to either cpp or python. The engineer had set this environment variable in his shell while playing around with Protobuf and had later installed the pip package.

Once I explicitly set the PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION environment variable in my Python code before importing Protobuf, the float values were the same on both computers!
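
As a minimal sketch of that fix, pinning the implementation before the first Protobuf import looks something like this (choosing cpp here is just an example; the point is that every machine uses the same value):

import os

# Pin the Protobuf Python implementation before google.protobuf is imported,
# so that every machine uses the same engine ("cpp" or "python").
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "cpp"

# Protobuf modules are imported only after the variable is set.
from google.protobuf import text_format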

Now why should the implementation affect the float values? Because Python's float is actually double precision, while a Protobuf float field is 32-bit. When such a 32-bit float moved between Python code and the C++ engine and back to Python code, its printed precision sometimes changed. By using the same implementation on all computers, we ensured that at least the float values did not vary between machines.
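
To illustrate the underlying effect (this example is mine, not from the original investigation), round-tripping a Python double through 32-bit float precision changes the digits that get printed:

import struct

x = 0.1  # a Python float is a 64-bit double
# Round-trip the value through a 32-bit float, as a Protobuf float field would.
x32 = struct.unpack("f", struct.pack("f", x))[0]

print(repr(x))    # 0.1
print(repr(x32))  # 0.10000000149011612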

Tried with: Python Protobuf 3.3.0 and Ubuntu 14.04

How to visualize Caffe Net using GraphViz

The network architecture of Convolutional Neural Networks (CNN) can be heavily layered and complex. A visual rendering of the network is a great way to get a sense of its architecture. Since the network is a graph, it is easy to visualize it using GraphViz.

Caffe requires its Net to be described in the Google Protobuf format. It also provides a draw_net.py script that can output the graph in any of the formats supported by GraphViz. (A Python-level sketch of the same drawing call is shown after the steps below.)

  • From the Caffe root directory, you can export a .prototxt model file as a graph to a PNG image file:
$ python/draw_net.py foo.prototxt foo.png

Possible output formats include PNG, PDF, DOT and others supported by GraphViz.

  • By default, the net layers are drawn from left-to-right. I prefer to visualize a CNN in top-to-bottom fashion:
$ python/draw_net.py --rankdir TB foo.prototxt foo.png
  • I prefer to interact with the graph visualization, which is a bit difficult with an image file. So, I export to a DOT format file instead and play with it using XDot:
$ python/draw_net.py foo.prototxt foo.dot
$ xdot foo.dot
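
The draw_net.py commands above wrap Caffe's Python drawing helpers. Here is a rough sketch of the same top-to-bottom rendering done directly from Python, assuming pycaffe and its pydot dependency are installed and reusing the example names foo.prototxt and foo.png:

import caffe.draw
from caffe.proto import caffe_pb2
from google.protobuf import text_format

# Parse the text-format NetParameter from the prototxt file.
net = caffe_pb2.NetParameter()
with open("foo.prototxt") as f:
    text_format.Merge(f.read(), net)

# Render the net to a PNG; rankdir="TB" draws the layers top-to-bottom.
caffe.draw.draw_net_to_file(net, "foo.png", rankdir="TB")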

Tried with: Ubuntu 14.04