
Detecting Supported Layers #35

Open
Proryanator opened this issue Apr 7, 2024 · 5 comments

Proryanator commented Apr 7, 2024

Hey there! This repo is super amazing, thanks for putting all these findings together.

I was reading the sub-section (linked below) on which layers are unsupported, and I had an idea for programmatically identifying them.

https://github.com/hollance/neural-engine/blob/master/docs/unsupported-layers.md

This part about alternating supported/unsupported layers, "S → U → S → U → S → U", made me wonder. In theory, if we take a layer X from a CoreML model (or just some CoreML op) that we don't yet know can run on the ANE, build a dummy model made solely of that layer repeated, i.e. X -> X -> X ..., and restrict the compute units to CPU/ANE only, Core ML should be encouraged to run the whole model on a single compute unit, since that would be the most efficient. So if, with compute units set to CPU && ANE, you see the layer run on the ANE, you've identified that this op is ANE compatible? 🧐 Could even record stats as well.

I'd like to test this theory but wanted to run it by you first. Given a model, this could be a way to programmatically pull out individual layers, build a simple repeated-layer model for each, and produce a chart of whether each layer is CPU/GPU/ANE supported (maybe even with statistics). That could even become a publicly available chart of ops and their supported compute units, since to my knowledge nothing like that exists today.

This would help with identifying places where a layer could be swapped out or modified to encourage the model to run more efficiently on the ANE.
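
Roughly what I'm imagining, as a minimal sketch (the op under test here, GELU, the repeat count, and the file name are just placeholders; confirming where each layer actually ran would still rely on Xcode's performance report or a compute-plan inspection):

```python
# Minimal probe sketch, assuming PyTorch + coremltools (names/paths are illustrative).
import torch
import coremltools as ct


class RepeatedOp(torch.nn.Module):
    """Chains the same candidate op N times: X -> X -> ... -> X."""

    def __init__(self, repeats: int = 16):
        super().__init__()
        self.layers = torch.nn.Sequential(*[torch.nn.GELU() for _ in range(repeats)])

    def forward(self, x):
        return self.layers(x)


example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(RepeatedOp().eval(), example)

# Restrict execution to CPU + Neural Engine so GPU fallback is ruled out;
# if the op is unsupported on the ANE it has nowhere to go but the CPU.
probe = ct.convert(
    traced,
    inputs=[ct.TensorType(name="x", shape=example.shape)],
    convert_to="mlprogram",
    compute_units=ct.ComputeUnit.CPU_AND_NE,
)
probe.save("gelu_probe.mlpackage")
```

Then the resulting .mlpackage could be opened in Xcode's performance report to see which compute unit the repeated op landed on.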

Any thoughts would be appreciated! 😊

hollance commented Apr 7, 2024

That sounds like an interesting approach!

@Proryanator (Author)

Thanks! Will give this a shot at some point and share what I find.

Proryanator commented May 25, 2024

@hollance thanks for writing those Core ML survival guides; I find them an invaluable supplement to Apple's own documentation. I've started working on this and am calling it "anetools" 👊

Proryanator commented May 26, 2024

Using your "Model Surgery" section, I was able to programmatically isolate the first layer of a small neural network! The cool thing is that I could run the performance tab on it too.

Next I'm going to work on making this possible generically for all layers in a model, which will take more than just input/output name matching (shapes and datatypes too, which will be a bit tricky).
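
For reference, this is roughly the kind of spec surgery I'm doing so far, as a sketch (it assumes the older neuralNetwork spec and that the first layer reads straight from the model input; the path is a placeholder, and fixing up the output description's shape/dtype is exactly the part that still needs work):

```python
import coremltools as ct

spec = ct.models.MLModel("model.mlmodel").get_spec()
nn = spec.neuralNetwork

# Keep only the first layer.
del nn.layers[1:]

# Re-point the surviving layer's output blob at the declared model output so
# the graph still matches the description (the output's shape/dtype may also
# need updating, which is the harder part).
nn.layers[0].output[0] = spec.description.output[0].name

isolated = ct.models.MLModel(spec)
isolated.save("first_layer_only.mlmodel")
```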

Off the top of your head, do you know how I could programmatically map a layer's output shape (which isn't as straightforward as reading layer.output.shape; it's probably more involved) to the equivalent model.output_description.type?
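
These are the two pieces I'm trying to connect, in case it helps frame the question (again a sketch against the neuralNetwork spec, with a placeholder path): the declared model outputs carry a full FeatureType, while a layer's outputs are just blob names.

```python
import coremltools as ct

spec = ct.models.MLModel("model.mlmodel").get_spec()

# Declared model outputs: a name plus a FeatureType (e.g. multiArrayType with a shape).
for out in spec.description.output:
    print("model output:", out.name, out.type.WhichOneof("Type"))

# NeuralNetwork layers only reference their input/output blobs by name, so the
# shape/type has to come from somewhere else (the description or shape inference).
for layer in spec.neuralNetwork.layers:
    print("layer:", layer.name, "in:", list(layer.input), "out:", list(layer.output))
```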

[Screenshot 2024-05-25 at 11:17:39 PM]

Proryanator changed the title from "Thoughts on 'Detecting' Supported Layers" to "Detecting Supported Layers" on May 26, 2024
@hollance (Owner)

Honestly, it's been too long since I did anything with Core ML so I don't know off the top of my head.
