

(Studio2) Refactors SD pipeline to rely on turbine-models pipeline, fixes to LLM, gitignore #2129

Merged · 20 commits merged into main on May 28, 2024

Conversation

monorimet (Collaborator)

No description provided.

IanNod previously approved these changes Apr 24, 2024
@IanNod left a comment:

Minor comment but looks good to me

total_time = time.time() - start_time
text_output = f"Total image(s) generation time: {total_time:.4f}sec"
print(f"\n[LOG] {text_output}")
# total_time = time.time() - start_time
Nit: commented code
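
The nit refers to the leftover commented-out duplicate of the timing line. A minimal sketch of the snippet with that comment dropped (the start_time assignment and the generation call are assumed context, not part of the quoted diff):

import time

start_time = time.time()
# ... image generation runs here (assumed context) ...
total_time = time.time() - start_time
text_output = f"Total image(s) generation time: {total_time:.4f}sec"
print(f"\n[LOG] {text_output}")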

@gpetters94 (Contributor) left a comment:
Few small changes, otherwise looks good.

Resolved review threads: .gitignore, requirements.txt (outdated), shark/iree_utils/gpu_utils.py
gpetters94 previously approved these changes May 28, 2024
@gpetters94 left a comment:
LGTM

monorimet merged commit 68e9281 into main on May 28, 2024
2 checks passed
monorimet added a commit that referenced this pull request on May 28, 2024, with the following message:
* Update requirements.txt for iree-turbine (#2130)

* Fix Llama2 on CPU (#2133)

* Filesystem cleanup and custom model fixes (#2127)

* Initial filesystem cleanup

* More filesystem cleanup

* Fix some formatting issues

* Address comments

* Remove IREE pin (fixes exe issue) (#2126)

* Diagnose a build issue

* Remove IREE pin

* Revert the build on pull request change

* Update find links for IREE packages (#2136)

* (Studio2) Refactors SD pipeline to rely on turbine-models pipeline, fixes to LLM, gitignore (#2129)

* Shark Studio SDXL support, HIP driver support, simpler device info, small fixes

* Fixups to llm API/UI and ignore user config files.

* Small fixes for unifying pipelines.

* Update requirements.txt for iree-turbine (#2130)

* Fix Llama2 on CPU (#2133)

* Filesystem cleanup and custom model fixes (#2127)

* Fix some formatting issues

* Remove IREE pin (fixes exe issue) (#2126)

* Update find links for IREE packages (#2136)

* Shark Studio SDXL support, HIP driver support, simpler device info, small fixes

* Abstract out SD pipelines from Studio Webui (WIP)

* Switch from pin to minimum torch version and fix index url

* Fix device parsing.

* Fix linux setup

* Fix custom weights.

---------

Co-authored-by: saienduri <77521230+saienduri@users.noreply.github.com>
Co-authored-by: gpetters-amd <159576198+gpetters-amd@users.noreply.github.com>
Co-authored-by: gpetters94 <gpetters@protonmail.com>

* Remove leftover merge conflict line from setup script. (#2141)

* Add a few requirements for ensured parity with turbine-models requirements. (#2142)

* Add scipy to requirements.

Adds diffusers req and a note for torchsde.

* Update linux setup script.

* Move brevitas install

---------

Co-authored-by: saienduri <77521230+saienduri@users.noreply.github.com>
Co-authored-by: gpetters-amd <159576198+gpetters-amd@users.noreply.github.com>
Co-authored-by: gpetters94 <gpetters@protonmail.com>

4 participants