
Porting network caffe on mobile caffe. #63

Open
eaangi opened this issue Jun 6, 2019 · 0 comments

eaangi commented Jun 6, 2019

Hello,
I would like to use your mobile library for my thesis and have a few questions; I hope you can help me.
My model runs correctly in MATLAB.
I also tried your Android example, and it works fine.

Later I tried the following:

  • I replaced your model with mine (after converting it to protobin) and copied the weights and the Caffe net.

  • the first difference is that my model's input layer has 3 channels:
    layer {
       name: "input"
       type: "Input"
       top: "color_crop"
       input_param {
         shape {
           dim: 1
           dim: 3
           dim: 128
           dim: 128
         }
       }
    }

Before running inference, I transformed the image acquired by the camera into a w × h × 3 (RGB) matrix in order to match the input layer.

But when I run inference, everything crashes, because the input must be a byte array.

Where am I going wrong? Why is there this difference compared to Caffe on the PC?
Should the library convert the image not into a matrix but into a byte array of size w × h × 3?
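If the library does expect a flat byte array of length w × h × 3, the conversion from an RGB matrix can be sketched as below. This is an assumption-laden illustration, not the library's API: the class and method names are hypothetical, and interleaved HWC (row, column, channel) ordering is assumed — Caffe itself typically stores blobs in planar CHW order, so the expected layout should be verified against the library's documentation.

```java
// Hypothetical sketch: flatten an h x w x 3 RGB matrix into an interleaved
// (HWC) byte array of length h * w * 3. Whether this particular mobile
// Caffe port wants HWC or planar CHW order is an assumption to verify.
public class RgbFlatten {
    // pixels[y][x][c] holds 0..255 channel values (c = 0:R, 1:G, 2:B)
    public static byte[] toByteArray(int[][][] pixels) {
        int h = pixels.length;
        int w = pixels[0].length;
        byte[] out = new byte[h * w * 3];
        int i = 0;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                for (int c = 0; c < 3; c++) {
                    // Narrowing cast keeps the low 8 bits (unsigned byte value).
                    out[i++] = (byte) pixels[y][x][c];
                }
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[][][] img = new int[2][2][3];
        img[0][0] = new int[] {255, 0, 0}; // one red pixel
        byte[] flat = toByteArray(img);
        System.out.println(flat.length);    // 2 * 2 * 3 = 12
        System.out.println(flat[0] & 0xFF); // 255
    }
}
```

On an actual Android `Bitmap`, the per-pixel values would come from something like `Bitmap.getPixels` (packed ARGB ints) before being split into channels; that step is omitted here to keep the sketch self-contained.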

Thanks in advance for your help.
