brkirch
98fd5cde72
Add check for psutil
3 years ago
brkirch
c0484f1b98
Add cross-attention optimization from InvokeAI
...
* Add cross-attention optimization from InvokeAI (~30% speed improvement on MPS)
* Add command line option for it
* Make it default when CUDA is unavailable
3 years ago
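For context on the entry above: a common form of this kind of cross-attention optimization is sliced ("chunked") attention, where the softmax(q·kᵀ)·v product is computed in slices along the query dimension so the full attention matrix never materializes at once. The sketch below shows only that general idea; slice size and shapes are illustrative assumptions, not the InvokeAI or webui code.

```python
# Minimal sketch of sliced cross-attention: compute attention in query-dimension
# slices to bound peak memory. Illustrative only.
import torch


def sliced_attention(q, k, v, slice_size=1024):
    # q, k, v: (batch, tokens, dim); q and v share the same head dim here.
    scale = q.shape[-1] ** -0.5
    out = torch.empty_like(q)
    for start in range(0, q.shape[1], slice_size):
        end = min(start + slice_size, q.shape[1])
        attn = torch.softmax(q[:, start:end] @ k.transpose(-1, -2) * scale, dim=-1)
        out[:, start:end] = attn @ v
    return out


if __name__ == "__main__":
    q, k, v = torch.randn(1, 4096, 64), torch.randn(1, 77, 64), torch.randn(1, 77, 64)
    print(sliced_attention(q, k, v).shape)  # torch.Size([1, 4096, 64])
```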
AUTOMATIC
873efeed49
rename hypernetwork dir to hypernetworks to prevent a clash with an old filename that people who downloaded a zip instead of using git clone will still have
3 years ago
AUTOMATIC
5de806184f
Merge branch 'master' into hypernetwork-training
3 years ago
hentailord85ez
5e2627a1a6
Comma backtrack padding ( #2192 )
...
Comma backtrack padding
3 years ago
C43H66N12O12S2
623251ce2b
allow Pascal onwards
3 years ago
hentailord85ez
d5c14365fd
Add back in output hidden states parameter
3 years ago
hentailord85ez
460bbae587
Pad beginning of textual inversion embedding
3 years ago
hentailord85ez
b340439586
Unlimited Token Works
...
Unlimited tokens actually work now, including with textual inversion. Replaces the previous implementation, which did not work well.
3 years ago
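A hedged sketch of how a hard token limit can be lifted by chunking, assuming the Hugging Face CLIP tokenizer and encoder: token ids are split into 75-token chunks, each chunk is wrapped in BOS/EOS and encoded separately, and the per-chunk embeddings are concatenated. The chunk handling here is an illustration of the idea, not the webui's actual implementation.

```python
# Encode arbitrarily long prompts by chunking into 75-token pieces.
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

CHUNK = 75  # 77 tokens per chunk once BOS and EOS are added


def encode_long_prompt(prompt: str) -> torch.Tensor:
    ids = tokenizer(prompt, add_special_tokens=False).input_ids
    chunks = [ids[i:i + CHUNK] for i in range(0, max(len(ids), 1), CHUNK)]
    outputs = []
    for chunk in chunks:
        padded = chunk + [tokenizer.eos_token_id] * (CHUNK - len(chunk))
        batch = torch.tensor([[tokenizer.bos_token_id] + padded + [tokenizer.eos_token_id]])
        with torch.no_grad():
            outputs.append(encoder(batch).last_hidden_state)
    return torch.cat(outputs, dim=1)  # (1, 77 * n_chunks, hidden_dim)


print(encode_long_prompt("a long prompt " * 30).shape)
```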
Fampai
1824e9ee3a
Removed unnecessary tmp variable
3 years ago
Fampai
ad3ae44108
Updated code for legibility
3 years ago
Fampai
e59c66c008
Optimized code for ignoring last CLIP layers
3 years ago
Fampai
1371d7608b
Added ability to ignore last n layers in FrozenCLIPEmbedder
3 years ago
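A sketch of the "ignore last n layers" (CLIP skip) idea, assuming the Hugging Face CLIPTextModel: request all hidden states and use the output of an earlier transformer layer instead of the final one. Re-applying the final layer norm is an assumption here, not taken from the repository.

```python
# Take the text encoder output n layers before the last one.
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")


def encode_with_clip_skip(prompt: str, skip_last_n: int = 1) -> torch.Tensor:
    batch = tokenizer(prompt, padding="max_length", max_length=77,
                      truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = encoder(**batch, output_hidden_states=True)
    if skip_last_n <= 1:
        return out.last_hidden_state              # normal behaviour
    hidden = out.hidden_states[-skip_last_n]      # an earlier layer's output
    return encoder.text_model.final_layer_norm(hidden)


print(encode_with_clip_skip("a photo of a cat", skip_last_n=2).shape)  # (1, 77, 768)
```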
AUTOMATIC
3061cdb7b6
add --force-enable-xformers option and also add messages to console regarding cross attention optimizations
3 years ago
C43H66N12O12S2
cc0258aea7
check for Ampere without destroying the optimizations. again.
3 years ago
C43H66N12O12S2
017b6b8744
check for Ampere
3 years ago
AUTOMATIC
cfc33f99d4
why did you do this
3 years ago
AUTOMATIC
27032c47df
restore old opt_split_attention/disable_opt_split_attention logic
3 years ago
AUTOMATIC
dc1117233e
simplify xformers options: --xformers to enable and that's it
3 years ago
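For context, enabling xformers essentially routes q/k/v through xformers' memory-efficient attention kernel instead of the naive softmax matmul. A minimal sketch under that assumption; shapes are illustrative and this is not the repository's hijack code.

```python
# Use xformers' memory-efficient attention when available, else fall back.
import torch

try:
    import xformers.ops as xops
    XFORMERS_AVAILABLE = True
except ImportError:
    XFORMERS_AVAILABLE = False


def attention(q, k, v):
    # q, k, v: (batch, tokens, dim); the xformers path expects CUDA tensors.
    if XFORMERS_AVAILABLE and q.is_cuda:
        return xops.memory_efficient_attention(q, k, v)
    scale = q.shape[-1] ** -0.5
    return torch.softmax(q @ k.transpose(-1, -2) * scale, dim=-1) @ v
```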
AUTOMATIC1111
48feae37ff
Merge pull request #1851 from C43H66N12O12S2/flash
...
xformers attention
3 years ago
C43H66N12O12S2
970de9ee68
Update sd_hijack.py
3 years ago
C43H66N12O12S2
26b459a379
default to split attention if cuda is available and xformers is not
3 years ago
MrCheeze
5f85a74b00
fix bug where when using prompt composition, hijack_comments generated before the final AND will be dropped
3 years ago
AUTOMATIC
77f4237d1c
fix bugs related to variable prompt lengths
3 years ago
AUTOMATIC
4999eb2ef9
do not let user choose his own prompt token count limit
3 years ago
AUTOMATIC
706d5944a0
let user choose his own prompt token count limit
3 years ago
C43H66N12O12S2
91d66f5520
use new attnblock for xformers path
3 years ago
C43H66N12O12S2
b70eaeb200
delete broken and unnecessary aliases
3 years ago
AUTOMATIC
12c4d5c6b5
hypernetwork training mk1
3 years ago
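As context for the hypernetwork entries: the hypernetworks referred to here are small networks that transform the cross-attention context before the k and v projections. A minimal sketch under that assumption; the residual two-linear-layer shape is illustrative, not the repository's exact architecture.

```python
# Tiny residual MLP applied to the cross-attention context (illustrative).
import torch
import torch.nn as nn


class HypernetworkModule(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim * 2), nn.Linear(dim * 2, dim))

    def forward(self, context: torch.Tensor) -> torch.Tensor:
        return context + self.net(context)


# One module each for the context fed to the k and v projections.
hn_k, hn_v = HypernetworkModule(768), HypernetworkModule(768)
context = torch.randn(1, 77, 768)                  # CLIP text embeddings
k_context, v_context = hn_k(context), hn_v(context)
print(k_context.shape, v_context.shape)            # both (1, 77, 768)
```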
AUTOMATIC
f7c787eb7c
make it possible to use hypernetworks without opt split attention
3 years ago
C43H66N12O12S2
5e3ff846c5
Update sd_hijack.py
3 years ago
C43H66N12O12S2
5303df2428
Update sd_hijack.py
3 years ago
C43H66N12O12S2
35d6b23162
Update sd_hijack.py
3 years ago
C43H66N12O12S2
2eb911b056
Update sd_hijack.py
3 years ago
Jairo Correa
ad0cc85d1f
Merge branch 'master' into stable
3 years ago
AUTOMATIC
88ec0cf557
fix for incorrect embedding token length calculation (will break seeds that use embeddings, you're welcome!)
...
add option to input initialization text for embeddings
3 years ago
AUTOMATIC
820f1dc96b
initial support for training textual inversion
3 years ago
Jairo Correa
ad1fbbae93
Merge branch 'master' into fix-vram
3 years ago
AUTOMATIC
98cc6c6e74
add embeddings dir
3 years ago
AUTOMATIC
c715ef04d1
fix for incorrect model weight loading for #814
3 years ago
AUTOMATIC
c1c27dad3b
new implementation for attention/emphasis
3 years ago
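The attention/emphasis syntax weights parts of the prompt, with "(text)" conventionally boosting and "[text]" dampening by a factor of about 1.1 per bracket. Below is a deliberately simplified, per-token sketch of the parsing side only; it is not the repository's parser, which handles phrases and applies the weights to the text embeddings.

```python
# Per-token emphasis parsing: "(...)" multiplies the weight by 1.1,
# "[...]" divides it by 1.1. Illustration of the syntax only.
def emphasis_weight(token: str) -> tuple:
    weight = 1.0
    while token.startswith("(") and token.endswith(")"):
        token, weight = token[1:-1], weight * 1.1
    while token.startswith("[") and token.endswith("]"):
        token, weight = token[1:-1], weight / 1.1
    return token, weight


print([emphasis_weight(t) for t in "a ((cat)) on a [mat]".split()])
# [('a', 1.0), ('cat', ~1.21), ('on', 1.0), ('a', 1.0), ('mat', ~0.91)]
```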
Jairo Correa
c2d5b29040
Move silu to sd_hijack
3 years ago
Liam
e5707b66d6
switched the token counter to use hidden buttons instead of an API call
3 years ago
Liam
5034f7d759
added token counter next to txt2img and img2img prompts
3 years ago
AUTOMATIC
073f6eac22
potential fix for embeddings not loading on AMD cards
3 years ago
guaneec
615b2fc9ce
Fix token max length
3 years ago
AUTOMATIC
254da5d127
--opt-split-attention now on by default for torch.cuda, off for others (CPU and MPS, where the option reportedly does not work)
3 years ago
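Taken together, the selection logic these commits describe roughly amounts to: xformers when enabled, split attention by default on CUDA, and the InvokeAI-style path elsewhere. A hedged sketch of that decision; names are placeholders, not the webui's actual identifiers.

```python
# Rough sketch of choosing a cross-attention optimization by device/flags.
import torch


def pick_cross_attention_optimization(xformers_enabled: bool) -> str:
    if xformers_enabled and torch.cuda.is_available():
        return "xformers"
    if torch.cuda.is_available():
        return "split attention"
    return "InvokeAI-style (CPU/MPS)"


print(pick_cross_attention_optimization(xformers_enabled=False))
```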
AUTOMATIC
1578859305
fix for too large embeddings causing an error
3 years ago
AUTOMATIC
90401d96a6
fix an off-by-one error with an embedding at the start of the sentence
3 years ago
AUTOMATIC
ab38392119
add the part that was missing for word textual inversion checksums
3 years ago