Misclassification vs. TensorFlow #170
@mbartling @janjongboom
Still, thanks for your demo code.
BTW, I'll implement the C++ API of TF's softmax, which works only on 2-D tensors. If you look at the Python wrapper for softmax in TF, I think it's too much work to support a multi-dimensional tensor softmax op in the uTensor runtime.
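For reference, restricting softmax to 2-D tensors means applying it independently to each row. A minimal, numerically stable sketch of that (free-standing code, not uTensor's actual `SoftmaxOp` implementation; the flat row-major layout is an assumption) could look like:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Row-wise softmax over a 2-D tensor stored row-major in a flat
// vector. Subtracting the row maximum before exponentiating keeps
// the computation numerically stable.
std::vector<float> softmax2d(const std::vector<float>& x,
                             std::size_t rows, std::size_t cols) {
  std::vector<float> out(x.size());
  for (std::size_t r = 0; r < rows; ++r) {
    const float* row = &x[r * cols];
    float max_v = row[0];
    for (std::size_t c = 1; c < cols; ++c) max_v = std::max(max_v, row[c]);
    float sum = 0.0f;
    for (std::size_t c = 0; c < cols; ++c) {
      out[r * cols + c] = std::exp(row[c] - max_v);
      sum += out[r * cols + c];
    }
    for (std::size_t c = 0; c < cols; ++c) out[r * cols + c] /= sum;
  }
  return out;
}
```

Generalizing this to arbitrary rank mainly means computing strides for the softmax axis, which is where the extra complexity mentioned above comes from.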
@dboyliao What do you mean? I'm not using any explicit softmax as far as I know. Just a simple 40x20x10x4 MLP.
I see the
@janjongboom
@dboyliao @mbartling Does landing #171 mean this issue should be fixed?
Further testing is required to confirm this. But yes, according to @dboyliao's comment on your sample training script.
I'm pretty sure the original softmax in uTensor is incorrect.
@dboyliao @neil-tan I've tested this against the

```cpp
ctx.add(new RamTensor<float>(), "y_pred/Softmax:0");
ctx.push(new SoftmaxOp<float, float>(),
         { "y_pred/BiasAdd:0" },
         { "y_pred/Softmax:0" });
```

But this still yields the same incorrect results as on develop.
I see.
Let's see if I can come up with an automatic test generator :P
I think that API usage is fine for acquiring the output tensor.
I have looked into the fix @dboyliao provided for SoftmaxOp (which was indeed broken). But this is an issue at a deeper level, as the values are already incorrect when they are passed into the SoftmaxOp:
As you can see here, logit 0's value is already too high. @neil-tan @mbartling If I compile without quantization ops, this works fine:
This yields exactly the response I would get from TF. Is there any way I can see both the quantized and the unquantized state at similar points? I'm logging all input and output tensors right now, but I don't have a reference state to check in which layer something goes wrong, which makes it virtually impossible to debug.
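One way to pin down the diverging layer, assuming you can dump each layer's output from both the float (TF) and quantized (uTensor) runs as flat float arrays: compare them pairwise and report the worst deviation. This is a hypothetical helper, not part of uTensor's API:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Compare a layer's output from the quantized build against a float
// reference dumped from TF; return the largest absolute difference.
// A sudden jump in this value between consecutive layers points at
// the op where quantization goes wrong.
float max_abs_diff(const std::vector<float>& ref,
                   const std::vector<float>& actual) {
  float worst = 0.0f;
  std::size_t n = ref.size() < actual.size() ? ref.size() : actual.size();
  for (std::size_t i = 0; i < n; ++i) {
    float d = std::fabs(ref[i] - actual[i]);
    if (d > worst) worst = d;
  }
  return worst;
}
```

Running this per layer, in graph order, gives the missing reference trail described above.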
I have put a tool here (I think it's also helpful for other things) that compares output between uTensor and TF: https://github.com/janjongboom/utensor-fuzzer It currently finds 383 cases in the associated test file where uTensor misclassifies. The output looks something like this:
Note that when quantization is disabled all these errors disappear, so the problem is definitely there.
Okay. Good to know that the issue is due to quantization.
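As background on where quantization error comes from: TF's quantized ops use an affine min/max scheme (QuantizeV2-style) that maps a float range onto uint8. A minimal sketch of the round trip, under the assumption of that scheme (not uTensor's exact code), shows the error scale involved; a badly chosen range inflates it, which can corrupt logits:

```cpp
#include <cmath>
#include <cstdint>

// Map a float in [min_v, max_v] onto uint8 (affine quantization).
uint8_t quantize(float v, float min_v, float max_v) {
  float scale = (max_v - min_v) / 255.0f;
  float q = std::round((v - min_v) / scale);
  if (q < 0.0f) q = 0.0f;      // clamp out-of-range values
  if (q > 255.0f) q = 255.0f;
  return static_cast<uint8_t>(q);
}

// Recover the float approximation; the round-trip error is at most
// half the step size, i.e. (max_v - min_v) / 510 for in-range values.
float dequantize(uint8_t q, float min_v, float max_v) {
  float scale = (max_v - min_v) / 255.0f;
  return min_v + q * scale;
}
```

If a layer's min/max metadata is wrong or the requantization step mishandles it, the dequantized logits drift far beyond this bound, matching the inflated logit 0 seen earlier in the thread.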
@dboyliao I've added the simplest case that I can think of here: https://github.com/janjongboom/utensor-fuzzer/tree/super-simple-usecase. Just a two-layer NN (one input layer, then a softmax layer) and two examples (one is OK, one is wrong).
I have an example program here: https://github.com/janjongboom/utensor-misclassifies
The first classification is completely wrong... I added a second one as a sanity check, and that one seems to be OK.
TF result:
uTensor result:
The trained model and classification scripts for TF and uTensor are available in the repo.