
Importing Models into Lucid


Lucid provides dozens of models in the Lucid modelzoo that you can visualize without any setup. But if you're looking at this page, you likely want to visualize your own model.

In order to visualize your model, Lucid needs to know a number of things about your model. Importing your model into Lucid creates a special file describing your model which contains all the necessary information. Once your model is imported, anyone can easily visualize it without additional information -- whether another researcher, an artist, or you five years from now when you've forgotten all the details.

Overview

As of 2019, the recommended way to import models for visualization is using Lucid's Model.save(). Your code will look something like this:

import tensorflow as tf
from lucid.modelzoo.vision_models import Model

with tf.Graph().as_default() as graph, tf.Session() as sess:
    images = tf.placeholder("float32", [None, 224, 224, 3], name="input")

    # <Code to construct & load your model inference graph goes here>

    Model.save( ... )

You can now import your model into Lucid for visualization!

import lucid.optvis.render as render

model = Model.load("saved_model.pb")

render.render_vis(model, "layer_name:0")

Detailed Description

Note: This tutorial assumes you know how to build an inference graph for your model, the same kind of graph you use when running your model over your test set.

As long as you know how to construct and load your inference graph, the main challenge in importing your model is determining the correct metadata to give Lucid.

Lucid needs four pieces of metadata:

  • image_shape - For some visualizations, Lucid needs to know the shape of your input image.
  • input_name - Feature visualization works by optimizing the input node, so Lucid needs to know which node is the input. (If you construct and use a Placeholder as above, this will be 'input'.)
  • output_names - Lucid needs to know the outputs of your model in order to save it properly.
  • image_value_range - Lucid needs to know whether your model expects input values (pixel intensities) between 0 and 255, between 0 and 1, or something else. Unfortunately, this varies widely between models, so you need to tell Lucid (see the sketch after this list).
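
As a rule of thumb, image_value_range follows directly from your preprocessing. The lines below are a minimal sketch, pairing a few hypothetical preprocessing variants (not Lucid API) with the range each one implies:

import numpy as np

# Hypothetical preprocessing variants and the image_value_range each implies.
raw_pixels = np.random.randint(0, 256, size=(224, 224, 3)).astype(np.float32)

x = raw_pixels                  # raw pixel values, no scaling  ->  image_value_range = (0, 255)
x = raw_pixels / 255.0          # scaled to the unit interval   ->  image_value_range = (0, 1)
x = raw_pixels / 127.5 - 1.0    # scaled and centered           ->  image_value_range = (-1, 1)
x = raw_pixels - 117.0          # mean (~117) subtracted        ->  image_value_range = (-117, 138)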

If you don't know some of this metadata offhand, Lucid can sometimes infer it for you using Model.suggest_save_args(). Here's an example, along with its typical output.

import tensorflow as tf
from lucid.modelzoo.vision_models import Model

with tf.Graph().as_default() as graph, tf.Session() as sess:
    images = tf.placeholder("float32", [None, 224, 224, 3], name="input")

    # <Code to construct & load your model inference graph goes here>

    Model.suggest_save_args()

Typical output:

Inferred: input_name = input (because it was the only Placeholder in the graph_def)
Inferred: image_shape = [224, 224, 3]
Inferred: output_names = ['Softmax']  (because those are all the Softmax ops)
# Please sanity check all inferred values before using this code.
Incorrect `image_value_range` is the most common cause of feature visualization bugs! Most methods will fail silently with incorrect visualizations!
Model.save(
    input_name='input',
    image_shape=[224, 224, 3],
    output_names=['Softmax'],
    image_value_range=_,                   # TODO (eg. '[-1, 1], [0, 1], [0, 255], or [-117, 138]')
  )

However, you'll likely need to fill in some information yourself.

Note: Getting image_value_range wrong is the single most common cause of feature visualization bugs, and it's really annoying to catch later on. All the code will run, but your results will be off because the model is getting values that don't make sense to it. We recommend double-checking that you got image_value_range right.
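
One cheap check is to run your real inference-time preprocessing on an image and compare the resulting minimum and maximum values against the range you plan to declare. A minimal sketch, where preprocess and declared_range stand in for your own preprocessing function and value range:

import numpy as np

def preprocess(img):
    # Placeholder for your real inference-time preprocessing.
    return img / 255.0

img = np.random.randint(0, 256, size=(224, 224, 3)).astype(np.float32)
x = preprocess(img)

declared_range = (0, 1)  # the image_value_range you plan to pass to Model.save()
assert declared_range[0] <= x.min() and x.max() <= declared_range[1], \
    "Preprocessed pixel values fall outside the declared image_value_range!"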

Your final code should look something like this:

import tensorflow as tf
from lucid.modelzoo.vision_models import Model

with tf.Graph().as_default() as graph, tf.Session() as sess:
    images = tf.placeholder("float32", [None, 224, 224, 3], name="input")

    # <Code to construct & load your model inference graph goes here>
    # ...

    Model.save(
      "saved_model.pb",
      image_shape=[224, 224, 3],
      input_name='input',
      output_names=['Softmax'],
      image_value_range=[0,1],
    )

Keras Specific Advice

Keras doesn't register its session as the default session. As such, you'll want to do something like this:

import keras.backend as K

with K.get_session().as_default():
    ...
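
Putting it together, here's a sketch of exporting a Keras model. It assumes a TF1-style Keras setup; "my_model.h5" and the [0, 1] value range are placeholders for your own model file and preprocessing:

import keras
import keras.backend as K
from lucid.modelzoo.vision_models import Model

K.set_learning_phase(0)  # make sure the inference graph is built, not the training graph
keras_model = keras.models.load_model("my_model.h5")

with K.get_session().as_default():
    Model.save(
        "saved_model.pb",
        image_shape=list(keras_model.input_shape[1:]),   # e.g. [224, 224, 3]
        input_name=keras_model.input.op.name,            # name of the input placeholder
        output_names=[keras_model.output.op.name],       # e.g. the final Softmax op
        image_value_range=[0, 1],                        # whatever your preprocessing produces
    )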

Exporting Directly to Google Cloud Storage

Model.save() can save directly to a Google Cloud bucket if your computer is properly authenticated with Google Cloud. This can be useful for permanently archiving your model for future visualization and analysis, making it accessible across servers, and sharing with others.

Model.save("gs://bucket-name/saved_model_path/model_name.pb", ...)

Debugging Common Issues

I can run feature visualization, but my model features don't seem right

Debugging check list:

  • Are you sure you set image_value_range correctly?
  • Is it possible you exported your training graph instead of your inference graph?
    • In particular, if your model uses batch norm, is it possible batch norm is in training mode (batch norm parameters change in response to input)?
  • Is it possible your model was trained with extreme (unrealistic) data augmentation? In particular, extreme hue rotation is quite common; it both hurts model performance and makes feature visualizations look strange, because the model is trying to be invariant to hue.
  • Are you visualizing a pooling layer? We recommend starting with conv layers (the sketch below shows one way to list them).
  • Are you visualizing a residual model after addition back into the residual stream? We recommend visualizing before addition.

For more detailed help with these problems, see Failure Modes.
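
If you're not sure which ops in your exported graph are conv layers, a quick way to list candidates is to read the frozen graph back and print op names. A minimal sketch, assuming TF1 and that "saved_model.pb" is the file you exported:

import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile("saved_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Conv and ReLU ops are usually the most interesting layers to visualize.
for node in graph_def.node:
    if node.op in ("Conv2D", "Relu"):
        print(node.name)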

model.layers is empty

This is expected. The layer list only exists in manually defined classes, like those in the modelzoo. Models returned by Model.load() will always have an empty layer list.

This is because model.layers is a human-defined list of layers of interest. There are a few reasons for this:

  1. For most purposes, you do not need layers defined. We didn't wish to make exporting models much more complicated for something only needed in some cases.
  2. Automatically creating a list of layers would require us to heuristically determine which nodes in the graph are "layers." We now have pretty good heuristics for guessing which layers are of interest, but we didn't when we made this originally, and even the improved heuristics break for unusual models.
  3. The most important aspect of having layers is having layer.activations, an array of how the layer responds to a fixed set of ImageNet classes. Creating these requires access to ImageNet, and would make importing models much slower and more difficult.

If you want to create a manually defined Model class, look at examples in modelzoo.
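
For reference, a manually defined class is just a subclass of the modelzoo Model with the metadata above as class attributes. The sketch below is illustrative only; every value is a placeholder, and the exact format of the optional layers attribute depends on your Lucid version, so copy from a real modelzoo class rather than from here:

from lucid.modelzoo.vision_models import Model

class MyModel(Model):
    # All values are placeholders; fill in your own.
    model_path = "gs://bucket-name/saved_model_path/model_name.pb"
    image_shape = [224, 224, 3]
    image_value_range = (0, 1)
    input_name = "input"
    # layers = [...]  # optionally, a hand-curated list of layers of interest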