
Tensorflow inception V3 too many values to unpack (expected 2) #863

DavidGOrtega opened this issue Jun 3, 2018 · 4 comments

@DavidGOrtega

Hi,

I'm trying to convert a TensorFlow .pb file for testing purposes.
The model is inception_v3_2016_08_28_frozen.pb, which can be found here:

https://www.tensorflow.org/tutorials/image_recognition

import tensorflow as tf
from webdnn.frontend.tensorflow import TensorFlowConverter
from webdnn.backend import generate_descriptor


model_path  = 'inception_v3_2016_08_28_frozen.pb'
out_path    = './../models/inception_v3_2016_08_28_frozen'

# Load the frozen GraphDef from disk
with tf.gfile.GFile(model_path, "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Import it into a fresh graph
with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def,
      input_map=None,
      return_elements=None,
      name="")

tfsession   = tf.Session(graph=graph)

# Input and output tensors of Inception V3
in_1        = tfsession.graph.get_tensor_by_name("input:0")
out_1       = tfsession.graph.get_tensor_by_name("InceptionV3/Predictions/Softmax:0")

print([in_1, out_1])

# Convert the graph and generate a WebDNN graph descriptor
graph       = TensorFlowConverter(tfsession).convert([in_1], [out_1])

exec_info   = generate_descriptor("webgpu", graph)  # "webassembly", "webgl", "fallback" are also available
exec_info.save(out_path)
/usr/local/lib/python3.6/site-packages/webdnn-1.2.5-py3.6.egg/webdnn/util/console.py:30: Warning: [KerasConverter] keras.layers.AveragePooling computes average by dividing number of valid elements in window (without padding element), but WebDNN divides it by the number of elements including padding element, so different result will be generated on the edge.
  warnings.warn(message, category)
Traceback (most recent call last):
  File "tftest.py", line 51, in <module>
    convert_graph()
  File "tftest.py", line 41, in convert_graph
    graph       = TensorFlowConverter(tfsession).convert([in_1], [out_1])
  File "/usr/local/lib/python3.6/site-packages/webdnn-1.2.5-py3.6.egg/webdnn/frontend/tensorflow/converter.py", line 96, in convert
    self._convert_operator(op)
  File "/usr/local/lib/python3.6/site-packages/webdnn-1.2.5-py3.6.egg/webdnn/frontend/converter.py", line 117, in _convert_operator
    self._handler_map[self.__class__.__name__][operator_key](self, operator)
  File "/usr/local/lib/python3.6/site-packages/webdnn-1.2.5-py3.6.egg/webdnn/frontend/tensorflow/ops/gen_array_ops.py", line 83, in concat_v2_handler
    for x0, x1 in itertools.permutations(xs):
ValueError: too many values to unpack (expected 2)
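
For context, the unpack error is consistent with iterating `itertools.permutations(xs)` without a length argument: when a ConcatV2 node has more than two inputs, each permutation is a tuple longer than two and cannot be unpacked into `(x0, x1)`. A minimal sketch of the failure mode (the three-element `xs` here is hypothetical, standing in for a ConcatV2 node's inputs):

```python
import itertools

xs = ["a", "b", "c"]  # hypothetical: a ConcatV2 node with three inputs

# permutations() without a length yields full-length tuples,
# so unpacking each one into (x0, x1) fails when len(xs) > 2:
try:
    for x0, x1 in itertools.permutations(xs):
        pass
except ValueError as e:
    print(e)  # too many values to unpack (expected 2)

# Passing r=2 yields ordered pairs and unpacks cleanly:
pairs = list(itertools.permutations(xs, 2))
print(len(pairs))  # 6 ordered pairs
```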

@milhidaka
Member

Thanks for reporting. I implemented a patch, and your script now runs without error:
https://github.com/mil-tokyo/webdnn/tree/issue863
Please try it (I cannot confirm that the converted model works correctly).

@DavidGOrtega
Author

Thanks! I'll let you know.

@DavidGOrtega
Author

It works! However, the returned type seems to be Int32Array while I was expecting Float32Array. Is that right? Is there any option to enforce it?

Also, some backends cannot be generated, like webgpu and webassembly, but I think you already know that.

@milhidaka
Member

In my environment, graph descriptors for all backends can be generated.
For webassembly, Emscripten has to be set up:
https://mil-tokyo.github.io/webdnn/docs/tutorial/setup.html#installing-emscripten-and-eigen

Where did you get Int32Array? Arrays are Float32 by default, unless explicitly specified otherwise.
