BiDAF

Description

This model is a neural network for answering a query about a given context paragraph.

Model

| Model | Download | Download (with sample test data) | ONNX version | Opset version | Accuracy |
|-------|----------|----------------------------------|--------------|---------------|----------|
| BiDAF | 41.5 MB | 37.3 MB | 1.4 | 9 | EM of 68.1 in SQuAD v1.1 |
| BiDAF-int8 | 12 MB | 8.7 MB | 1.13.1 | 11 | EM of 65.93 in SQuAD v1.1 |

Compared with the fp32 BiDAF model, the int8 BiDAF model's accuracy drop ratio is 0.23% and its performance improvement is 0.89x on SQuAD v1.1.

Performance depends on the test hardware. The data here was collected on an Intel® Xeon® Platinum 8280 Processor (1 socket, 4 cores per instance) running CentOS Linux 8.3, with a data batch size of 1.


Inference

Input to model

Tokenized strings of context paragraph and query.

Preprocessing steps

Tokenize the words and characters of the context and query strings. The tokenized words are lowercased, while the characters are not. The characters of each word need to be clamped or padded to a list of length 16. Note that NLTK is used for word tokenization in preprocessing.

  • context_word: [seq, 1] of string
  • context_char: [seq, 1, 1, 16] of string
  • query_word: [seq, 1] of string
  • query_char: [seq, 1, 1, 16] of string

The following code shows how to preprocess input strings:

import numpy as np
from nltk import word_tokenize

def preprocess(text):
    tokens = word_tokenize(text)
    # lower-case word tokens, as a numpy array with shape (seq, 1)
    words = np.asarray([w.lower() for w in tokens]).reshape(-1, 1)
    # split words into chars, clamped/padded to length 16,
    # as a numpy array with shape (seq, 1, 1, 16)
    chars = [list(t)[:16] for t in tokens]
    chars = [cs + [''] * (16 - len(cs)) for cs in chars]
    chars = np.asarray(chars).reshape(-1, 1, 1, 16)
    return words, chars

# input
context = 'A quick brown fox jumps over the lazy dog.'
query = 'What color is the fox?'
cw, cc = preprocess(context)
qw, qc = preprocess(query)
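
With the preprocessed arrays in hand, the model can be run with ONNX Runtime. A minimal sketch, assuming the fp32 model has been downloaded as bidaf-9.onnx; the input and output names follow the lists in this document:

import onnxruntime as ort

# run the model; input/output names match those listed above
sess = ort.InferenceSession('bidaf-9.onnx')
answer = sess.run(['start_pos', 'end_pos'],
                  {'context_word': cw, 'context_char': cc,
                   'query_word': qw, 'query_char': qc})

The returned list plugs directly into the postprocessing snippet below.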

Output of model

The model has two outputs:

  • start_pos: the answer's start position (0-indexed) in the context
  • end_pos: the answer's inclusive end position (0-indexed) in the context

Postprocessing steps

The two positions index into the tokenized context to recover the answer span:

# `answer` holds the two output arrays: [start_pos, end_pos]
start = answer[0].item()  # .item() replaces np.asscalar, removed in recent NumPy
end = answer[1].item()
print([w.encode() for w in cw[start:end + 1].reshape(-1)])

For this test case, the output is:

[b'brown']

Dataset (Train and validation)

The model is trained with SQuAD v1.1.
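
SQuAD v1.1 ships as a single JSON file. A minimal sketch of iterating it, assuming the dev file dev-v1.1.json referenced in the quantization section below:

import json

with open('dev-v1.1.json') as f:
    squad = json.load(f)

# SQuAD v1.1 layout: data -> paragraphs -> qas
for article in squad['data']:
    for para in article['paragraphs']:
        context = para['context']
        for qa in para['qas']:
            question = qa['question']
            answers = [a['text'] for a in qa['answers']]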


Validation accuracy

The metric is Exact Match (EM) of 68.1, computed over the SQuAD v1.1 dev set.
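
For reference, EM counts a prediction as correct only if it equals one of the reference answers after answer normalization. A sketch of the metric, following the normalization used by the official SQuAD v1.1 evaluation script (not part of the model code):

import re
import string

PUNCT = set(string.punctuation)

def normalize(text):
    # SQuAD-style normalization: lowercase, strip punctuation,
    # drop the articles a/an/the, collapse whitespace
    text = ''.join(c for c in text.lower() if c not in PUNCT)
    text = re.sub(r'\b(a|an|the)\b', ' ', text)
    return ' '.join(text.split())

def exact_match(prediction, ground_truths):
    # scores 1.0 if the normalized prediction equals any reference
    return float(any(normalize(prediction) == normalize(gt)
                     for gt in ground_truths))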


Quantization

BiDAF-int8 is obtained by quantizing the fp32 BiDAF model. We use Intel® Neural Compressor with the ONNX Runtime backend to perform quantization. View the instructions to understand how to use Intel® Neural Compressor for quantization.

Prepare Model

Download the model from the ONNX Model Zoo:

wget https://github.com/onnx/models/raw/main/text/machine_comprehension/bidirectional_attention_flow/model/bidaf-9.onnx

Convert the opset version to 11 for broader quantization support:

import onnx
from onnx import version_converter

model = onnx.load('bidaf-9.onnx')
model = version_converter.convert_version(model, 11)
onnx.save_model(model, 'bidaf-11.onnx')
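
As an optional sanity check (not part of the original instructions), the converted model can be validated and its opset inspected:

import onnx

model = onnx.load('bidaf-11.onnx')
# raises if the converted graph is malformed
onnx.checker.check_model(model)
# the default domain entry should now report version 11
print(model.opset_import)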

Quantize Model

Dynamic quantization:

# --input_model: model path as *.onnx
bash run_tuning.sh --input_model=path/to/model \
                   --dataset_location=path/to/squad/dev-v1.1.json \
                   --output_model=path/to/model_tune
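
For quick experiments, ONNX Runtime also ships a dynamic-quantization API that can be called directly. A rough sketch, as an illustration only; the published BiDAF-int8 was produced with Intel® Neural Compressor as described above:

from onnxruntime.quantization import quantize_dynamic, QuantType

# dynamically quantize weights to int8; activations are
# quantized on the fly at inference time
quantize_dynamic('bidaf-11.onnx', 'bidaf-11-int8.onnx',
                 weight_type=QuantType.QInt8)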

Publication/Attribution

Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. "Bidirectional Attention Flow for Machine Comprehension" (paper).


License

MIT License