AUTOMATIC
59146621e2
better support for xformers flash attention on older versions of torch
3 years ago
Takuma Mori
3262e825cc
add --xformers-flash-attention option & impl
3 years ago
AUTOMATIC
40ff6db532
extra networks UI
rework of hypernets: rather than via settings, hypernets are added directly to the prompt as <hypernet:name:weight>
3 years ago
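Note: the <hypernet:name:weight> syntax above can be handled with a simple regex pass over the prompt. A minimal illustrative sketch follows; the function and pattern names are hypothetical, not the repo's actual extra-networks parser:

```python
import re

# Hypothetical parser for the <hypernet:name:weight> prompt syntax.
EXTRA_NETWORK_RE = re.compile(r"<hypernet:(?P<name>[^:>]+)(?::(?P<weight>[\d.]+))?>")

def extract_hypernets(prompt: str):
    """Return (cleaned_prompt, [(name, weight), ...]); weight defaults to 1.0."""
    found = []
    for m in EXTRA_NETWORK_RE.finditer(prompt):
        weight = float(m.group("weight")) if m.group("weight") else 1.0
        found.append((m.group("name"), weight))
    cleaned = EXTRA_NETWORK_RE.sub("", prompt).strip()
    return cleaned, found

print(extract_hypernets("a castle <hypernet:anime:0.8>, detailed"))
# -> ('a castle , detailed', [('anime', 0.8)])
```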
brkirch
c18add68ef
Added license
3 years ago
brkirch
b95a4c0ce5
Change the sub-quad chunk threshold to use a percentage
3 years ago
brkirch
d782a95967
Add Birch-san's sub-quadratic attention implementation
3 years ago
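Birch-san's implementation follows the memory-efficient attention algorithm of Rabe & Staats ("Self-attention Does Not Need O(n²) Memory"). A minimal sketch of the chunking idea, assuming PyTorch; the real implementation also chunks keys/values and uses a numerically stable streaming softmax:

```python
import torch

def chunked_attention(q, k, v, chunk_size=1024):
    # Process queries in chunks so the full (tokens x tokens) score matrix
    # is never materialized at once; peak memory scales with chunk_size
    # rather than with the squared sequence length.
    scale = q.shape[-1] ** -0.5
    out = torch.empty_like(q)
    for i in range(0, q.shape[-2], chunk_size):
        qc = q[..., i:i + chunk_size, :]
        scores = (qc @ k.transpose(-2, -1)) * scale  # (..., chunk, tokens)
        out[..., i:i + chunk_size, :] = scores.softmax(dim=-1) @ v
    return out

# agrees with full attention up to floating-point noise
q = k = v = torch.randn(2, 8, 4096, 64)
full = torch.softmax((q @ k.transpose(-2, -1)) * 64 ** -0.5, dim=-1) @ v
assert torch.allclose(chunked_attention(q, k, v), full, atol=1e-4)
```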
brkirch
35b1775b32
Use other MPS optimization for large q.shape[0] * q.shape[1]
Check if q.shape[0] * q.shape[1] is 2**18 or larger and use the lower-memory MPS optimization if it is. This should prevent most of the crashes that were occurring at certain resolutions (e.g. 1024x1024, 2048x512, 512x2048).
Also included is a change that checks slice_size and prevents it from being divisible by 4096, which also results in a crash. Otherwise, a crash can occur at 1024x512 or 512x1024 resolution.
3 years ago
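The dispatch the commit body describes, restated as code with illustrative names (the actual logic lives in sd_hijack_optimizations.py):

```python
def choose_mps_path(q, slice_size):
    # Attention maps with q.shape[0] * q.shape[1] >= 2**18 crash on MPS at
    # resolutions like 1024x1024, so fall back to the lower-memory path.
    use_low_memory_path = q.shape[0] * q.shape[1] >= 2 ** 18
    # A slice_size divisible by 4096 also crashes (e.g. at 1024x512),
    # so nudge it off the boundary.
    if slice_size % 4096 == 0:
        slice_size -= 1
    return use_low_memory_path, slice_size
```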
AUTOMATIC
505ec7e4d9
clean up some unneeded imports in the hijack files
3 years ago
AUTOMATIC
7dbfd8a7d8
do not replace the entire UNet for the resolution hack
3 years ago
Billy Cao
adb6cb7619
Patch UNet Forward to support resolutions that are not multiples of 64
Also modified the UI so it no longer steps in increments of 64
3 years ago
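One plausible shape for such a patch, sketched as a hypothetical pad-then-crop wrapper rather than this commit's exact code: pad the latent up to the next supported size, run the original forward, and crop the output back.

```python
import torch.nn.functional as F

def forward_with_padding(unet_forward, x, *args, **kwargs):
    # x is the latent tensor; pad it to the next multiple the UNet accepts,
    # then crop the result back to the requested resolution.
    h, w = x.shape[-2:]
    mult = 8  # 64 pixels == 8 latent cells at the usual 8x VAE downscale
    pad_h = (mult - h % mult) % mult
    pad_w = (mult - w % mult) % mult
    if pad_h or pad_w:
        x = F.pad(x, (0, pad_w, 0, pad_h), mode="replicate")
    out = unet_forward(x, *args, **kwargs)
    return out[..., :h, :w]
```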
Cheka
2fd7935ef4
Remove an incorrect self reference in the CUDA support for InvokeAI
3 years ago
C43H66N12O12S2
c71008c741
Update sd_hijack_optimizations.py
3 years ago
C43H66N12O12S2
84823275e8
re-add xformers attnblock
3 years ago
C43H66N12O12S2
2043c4a231
delete xformers attnblock
3 years ago
brkirch
861db783c7
Use apply_hypernetwork function
3 years ago
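In this codebase, "applying a hypernetwork" means running the cross-attention context through a pair of small networks before the k and v projections. An illustrative module, not the repo's actual class:

```python
import torch.nn as nn

class HypernetPair(nn.Module):
    # Illustrative stand-in for a hypernetwork layer pair: two small MLPs
    # produce modified contexts for the k and v projections respectively.
    def __init__(self, dim=768, hidden=1536):
        super().__init__()
        self.for_k = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
        self.for_v = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, context):
        # residual form: the original context plus a learned correction
        return context + self.for_k(context), context + self.for_v(context)
```

The attention forward then computes k and v from the two returned contexts instead of the raw one.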
brkirch
574c8e554a
Add InvokeAI and lstein to credits, add back CUDA support
3 years ago
brkirch
98fd5cde72
Add check for psutil
3 years ago
brkirch
c0484f1b98
Add cross-attention optimization from InvokeAI
* Add cross-attention optimization from InvokeAI (~30% speed improvement on MPS)
* Add command line option for it
* Make it default when CUDA is unavailable
3 years ago
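The InvokeAI optimization decides how finely to slice the attention einsum based on available memory, which is what the psutil check in 98fd5cde72 above supports. A rough sketch of the idea, with illustrative names and numbers:

```python
import psutil

def pick_slice_count(q, k, element_size=4, safety=3.3):
    # Bytes needed to hold the full (q_tokens x k_tokens) attention matrix.
    tensor_bytes = q.shape[0] * q.shape[1] * k.shape[1] * element_size
    free_bytes = psutil.virtual_memory().available
    if tensor_bytes * safety <= free_bytes:
        return 1  # the whole einsum fits at once
    # otherwise split along the query dimension into power-of-two slices
    slices = 2
    while tensor_bytes * safety / slices > free_bytes:
        slices *= 2
    return slices
```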
AUTOMATIC
873efeed49
rename the hypernetwork dir to hypernetworks to prevent a clash with an old filename that people who use zip downloads instead of git clone will have
3 years ago
AUTOMATIC
530103b586
fixes related to merge
3 years ago
AUTOMATIC
948533950c
replace duplicate code with a function
3 years ago
C43H66N12O12S2
3e7a981194
remove functorch
3 years ago
Fampai
122d42687b
Fix VRAM issue by only loading the hypernetwork when it is selected in settings
3 years ago
AUTOMATIC
e6e42f98df
make --force-enable-xformers work without needing --xformers
3 years ago
AUTOMATIC
f9c5da1592
add fallback for xformers_attnblock_forward
3 years ago
AUTOMATIC
dc1117233e
simplify xformers options: --xformers to enable, and that's it
3 years ago
AUTOMATIC
7ff1170a2e
emergency fix for xformers (continue + shared)
3 years ago
AUTOMATIC1111
48feae37ff
Merge pull request #1851 from C43H66N12O12S2/flash
xformers attention
3 years ago
C43H66N12O12S2
69d0053583
update sd_hijack_opt to respect new env variables
3 years ago
C43H66N12O12S2
76a616fa6b
Update sd_hijack_optimizations.py
3 years ago
C43H66N12O12S2
5d54f35c58
add xformers attnblock and hypernetwork support
3 years ago
brkirch
f2055cb1d4
Add hypernetwork support to split cross attention v1
* Add hypernetwork support to split_cross_attention_forward_v1
* Fix device check in esrgan_model.py to use devices.device_esrgan instead of shared.device
3 years ago
C43H66N12O12S2
c9cc65b201
switch to the proper way of calling xformers
3 years ago
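The "proper way" here is xformers' memory-efficient attention operator. A minimal call, assuming xformers is installed and tensors are shaped (batch, tokens, heads, head_dim):

```python
import torch
import xformers.ops

# fp16 on CUDA, the typical configuration for this code path
q = torch.randn(1, 4096, 8, 64, device="cuda", dtype=torch.float16)
k = torch.randn(1, 4096, 8, 64, device="cuda", dtype=torch.float16)
v = torch.randn(1, 4096, 8, 64, device="cuda", dtype=torch.float16)
out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None)
```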
AUTOMATIC
bad7cb29ce
added support for hypernetworks (???)
3 years ago
C43H66N12O12S2
f174fb2922
add xformers attention
3 years ago
Jairo Correa
ad0cc85d1f
Merge branch 'master' into stable
3 years ago
AUTOMATIC
820f1dc96b
initial support for training textual inversion
3 years ago