101 Commits (03d7b394539558f6f560155d87a4fc66eb675e30)

Author SHA1 Message Date
AUTOMATIC 505ec7e4d9 cleanup some unneeded imports for hijack files 3 years ago
AUTOMATIC 7dbfd8a7d8 do not replace entire unet for the resolution hack 3 years ago
AUTOMATIC1111 2641d1b83b Merge pull request #4978 from aliencaocao/support_any_resolution: Patch UNet Forward to support resolutions that are not multiples of 64 3 years ago
AUTOMATIC 0d21624cee move #5216 to the extension 3 years ago
AUTOMATIC 89e1df013b Merge remote-tracking branch 'wywywywy/autoencoder-hijack' 3 years ago
AUTOMATIC1111 a2feaa95fc Merge pull request #5194 from brkirch/autocast-and-mps-randn-fixes: Use devices.autocast() and fix MPS randn issues 3 years ago
SmirkingFace da698ca92e Fixed AttributeError where openaimodel is not found 3 years ago
wywywywy 36c3613d16 Add autoencoder to sd_hijack 3 years ago
brkirch 98ca437edf Refactor and instead check if mps is being used, not availability 3 years ago
AUTOMATIC b48b7999c8 Merge remote-tracking branch 'flamelaw/master' 3 years ago
Billy Cao 349f0461ec Merge branch 'master' into support_any_resolution 3 years ago
AUTOMATIC 64c7b7975c restore hypernetworks to seemingly working state 3 years ago
AUTOMATIC ce6911158b Add support for Stable Diffusion 2.0 3 years ago
Billy Cao adb6cb7619 Patch UNet Forward to support resolutions that are not multiples of 64; also modified the UI to no longer step in increments of 64 3 years ago
flamelaw bd68e35de3 Gradient accumulation, autocast fix, new latent sampling method, etc 3 years ago
killfrenzy96 17e4432820 cleanly undo circular hijack #4818 3 years ago
AUTOMATIC c62d17aee3 use the new devices.has_mps() function in register_buffer for DDIM/PLMS fix for OSX 3 years ago
AUTOMATIC 7ba3923d5b move DDIM/PLMS fix for OSX out of the file with inpainting code. 3 years ago
Jairo Correa af758e97fa Unload sd_model before loading the other 3 years ago
AUTOMATIC 2b91251637 removed aesthetic gradients as built-in; added support for extensions 3 years ago
AUTOMATIC 9286fe53de make aesthetic embedding compatible with prompts longer than 75 tokens 3 years ago
AUTOMATIC 7d6b388d71 Merge branch 'ae' 3 years ago
C43H66N12O12S2 73b5dbf72a Update sd_hijack.py 3 years ago
C43H66N12O12S2 786ed49922 use legacy attnblock 3 years ago
MalumaDev 9324cdaa31 ui fix, re organization of the code 3 years ago
MalumaDev e4f8b5f00d ui fix 3 years ago
MalumaDev 523140d780 ui fix 3 years ago
MalumaDev b694bba39a Merge remote-tracking branch 'origin/test_resolve_conflicts' into test_resolve_conflicts 3 years ago
MalumaDev 9325c85f78 fixed dropbox update 3 years ago
MalumaDev 97ceaa23d0 Merge branch 'master' into test_resolve_conflicts 3 years ago
C43H66N12O12S2 529afbf4d7 Update sd_hijack.py 3 years ago
MalumaDev 37d7ffb415 fix token length, added embs generator, added new features to edit the embedding before generation using text 3 years ago
MalumaDev bb57f30c2d init 3 years ago
AUTOMATIC 429442f4a6 fix iterator bug for #2295 3 years ago
hentailord85ez 80f3cf2bb2 Account when lines are mismatched 3 years ago
brkirch 98fd5cde72 Add check for psutil 3 years ago
brkirch c0484f1b98 Add cross-attention optimization from InvokeAI (~30% speed improvement on MPS); add command line option for it; make it default when CUDA is unavailable 3 years ago
AUTOMATIC 873efeed49 rename hypernetwork dir to hypernetworks to prevent clash with an old filename that people who use zip instead of git clone will have 3 years ago
AUTOMATIC 5de806184f Merge branch 'master' into hypernetwork-training 3 years ago
hentailord85ez 5e2627a1a6 Comma backtrack padding (#2192) 3 years ago
C43H66N12O12S2 623251ce2b allow pascal onwards 3 years ago
hentailord85ez d5c14365fd Add back in output hidden states parameter 3 years ago
hentailord85ez 460bbae587 Pad beginning of textual inversion embedding 3 years ago
hentailord85ez b340439586 Unlimited Token Works: unlimited tokens actually work now, including with textual inversion; replaces the previous not-so-much-working implementation 3 years ago
Fampai 1824e9ee3a Removed unnecessary tmp variable 3 years ago
Fampai ad3ae44108 Updated code for legibility 3 years ago
Fampai e59c66c008 Optimized code for Ignoring last CLIP layers 3 years ago
Fampai 1371d7608b Added ability to ignore last n layers in FrozenCLIPEmbedder 3 years ago
AUTOMATIC 3061cdb7b6 add --force-enable-xformers option and also add messages to console regarding cross attention optimizations 3 years ago
C43H66N12O12S2 cc0258aea7 check for ampere without destroying the optimizations. again. 3 years ago