I built and installed Caffe2. Running a simple Caffe2 Python import gave this error:
$ python -c "from caffe2.python import core"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/caffe2/python/__init__.py", line 2, in <module>
from caffe2.proto import caffe2_pb2
File "/usr/local/lib/python2.7/dist-packages/caffe2/proto/__init__.py", line 11, in <module>
from caffe2.proto import caffe2_pb2, metanet_pb2, torch_pb2
File "/usr/local/lib/python2.7/dist-packages/caffe2/proto/caffe2_pb2.py", line 23, in <module>
TypeError: __new__() got an unexpected keyword argument 'serialized_options'
This is caused by an older version of Protobuf. Updating that solved the problem:
$ pip install -U protobuf
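To confirm which Python Protobuf version is installed before and after the upgrade, a minimal check like this can help (assuming the google.protobuf package exposes __version__, which it does in recent releases):

```python
# Print the installed Python Protobuf version, if any
try:
    from google.protobuf import __version__ as pb_version
    print("protobuf", pb_version)
except ImportError:
    print("protobuf is not installed")
```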
Tried with: Raspbian 9
Tensorboard is a browser-based visualization tool for the training of deep learning models using TensorFlow. It is typically invoked on the log directory output by the TensorFlow training process. It is not straightforward to use it to visualize a model stored as a single protobuf (.pb) file.
Here is how to do that:
- Install TensorFlow and Tensorboard, if you do not have them already:
$ pip3 install -U --user tensorflow tensorboard
- Convert the protobuf file to a file that Tensorboard can work with using an import script that ships with TensorFlow:
$ python3 ~/.local/lib/python3.6/site-packages/tensorflow/python/tools/import_pb_to_tensorboard.py --model_dir foobar_model.pb --log_dir foobar_log_dir
This script creates the log directory you requested if it does not exist, and writes into it a file of the form events.out.tfevents.1557253678.your-hostname that Tensorboard understands.
Note that it is better to pass a different log directory for every model. Also note that although the option is named --model_dir, it actually expects a protobuf file as input.
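The event file name embeds the Unix timestamp of creation and the machine's hostname. A rough sketch of the naming pattern (the exact format is an internal detail of TensorFlow, so treat this as illustrative):

```python
import socket
import time

# Event files are named events.out.tfevents.<unix-time>.<hostname>
name = "events.out.tfevents.%d.%s" % (int(time.time()), socket.gethostname())
print(name)  # e.g. events.out.tfevents.1557253678.your-hostname
```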
- Now we can invoke Tensorboard with the log directory as input:
$ tensorboard --logdir=foobar_log_dir
The tensorboard executable should be present in your
~/.local/bin directory. If this path is not in your
PATH environment variable, consider adding it. Alternatively, you can invoke the executable using its absolute path.
- You can visualize and explore the structure of your model in Tensorboard by opening localhost:6006 in your browser.
I was using Google Protobuf in a Python program to read some text format Protobuf messages, merge them and write them out. Surprisingly, for the same set of input text format message files, I was getting different outputs on two computers! The values that were different were float values. The float values were generally correct, but varied slightly in precision between the two computers.
This strange observation took quite a long investigation. I initially assumed that maybe the Protobuf library (libprotobuf.so) or the Python Protobuf package were of different versions on these two computers. Surprisingly, they were exactly the same.
The mystery finally turned out to be the Protobuf implementation type. There are currently two possible types: cpp and python. By default, the cpp implementation is used. However, on one of the computers, the python implementation had been chosen by an engineer during the PIP package installation. The implementation is picked by setting an environment variable named
PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION to either cpp or python. The engineer had set this environment variable in his shell while playing around with Protobuf and had later installed the PIP package.
Once I explicitly set the
PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION environment variable in my Python code before importing Protobuf, the float values were the same on both computers!
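In code, setting it looks roughly like this. The key point is that the variable must be set before google.protobuf is first imported, since the implementation is chosen at import time:

```python
import os

# Force the pure-python implementation; use "cpp" for the C++ one.
# This must run before the first import of google.protobuf.
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"
```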
Now why should the implementation affect the float values? Because Python’s float is actually double precision, whereas Protobuf’s float fields are 32-bit. When a 32-bit float moved between Python code and the C++ engine and back to Python code, its precision sometimes changed. By using the same implementation on all computers, we ensured that at least the float values did not vary between machines.
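The precision difference can be reproduced without Protobuf at all, by round-tripping a Python double through a 32-bit float using the standard struct module. This is a sketch of the effect, not of Protobuf's internals:

```python
import struct

x = 0.1  # a Python float is a 64-bit double
# Pack to a 32-bit float and back, losing the low-order bits
x32 = struct.unpack("f", struct.pack("f", x))[0]
print(x)         # 0.1
print(x32)       # 0.10000000149011612
print(x == x32)  # False
```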
Tried with: Python Protobuf 3.3.0 and Ubuntu 14.04
The network architecture of Convolutional Neural Networks (CNN) can be heavily layered and complex. Viewing the network visually is a great way to get a sense of its architecture. Since the network is a graph, it is easy to visualize this using GraphViz.
Caffe requires its Net to be in the Google ProtoBuf format. It also provides a
draw_net.py script that can be used to output the graph to all the formats supported by GraphViz.
- From the Caffe root directory, you can export a
.prototxt model file as a graph to a PNG image file:
$ python/draw_net.py foo.prototxt foo.png
Possible output formats include PNG, PDF, DOT and others supported by GraphViz.
- By default, the net layers are drawn from left-to-right. I prefer to visualize a CNN in top-to-bottom fashion:
$ python/draw_net.py --rankdir TB foo.prototxt foo.png
- I like to interact with the graph visualization, which is a bit difficult with an image file. So, I prefer to export to a DOT format file and explore it using XDot:
$ python/draw_net.py foo.prototxt foo.dot
$ xdot foo.dot
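To get a feel for the DOT format, here is a hypothetical hand-written DOT file in the same spirit as what draw_net.py emits; the layer names are made up and the real output also includes shapes and styling:

```python
# Write a toy DOT graph of a small CNN (hypothetical layer names)
dot = """digraph caffe_net {
  rankdir=TB;  // top-to-bottom layout, like --rankdir TB
  data -> conv1 -> relu1 -> pool1 -> fc1 -> prob;
}
"""
with open("foo.dot", "w") as f:
    f.write(dot)
```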
Tried with: Ubuntu 14.04