756 Commits (7edd58d90dd08f68fab5ff84d26dedd0eb85cae3)

Author SHA1 Message Date
alg-wiki f0ab972f85 Merge branch 'master' into textual__inversion 3 years ago
alg-wiki bc3e183b73 Textual Inversion: Preprocess and Training will only pick up image files 3 years ago
AUTOMATIC 727e4d1086 no to different messages plus fix using != to compare to None 3 years ago
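The `!=`-to-None fix above reflects a standard Python pitfall: `!=` dispatches to `__eq__`/`__ne__`, which a class can override, while `is not None` is an identity check that cannot be fooled. A minimal illustration (not the repository's code):

```python
# Comparing to None with "!=" invokes __eq__, which a class may override;
# "is not None" checks identity and is unaffected.
class Always:
    def __eq__(self, other):
        return True  # claims equality with everything, including None

x = Always()
print(x != None)      # False -- misleading, __eq__ hijacked the comparison
print(x is not None)  # True  -- the identity check gives the right answer
```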
AUTOMATIC1111 b3d3b335cf Merge pull request #2131 from ssysm/upstream-master 3 years ago
  Add VAE Path Arguments
AUTOMATIC 39919c40dd add eta noise seed delta option 3 years ago
ssysm af62ad4d25 change vae loading method 3 years ago
C43H66N12O12S2 ed769977f0 add swinir v2 support 3 years ago
C43H66N12O12S2 ece27fe989 Add files via upload 3 years ago
C43H66N12O12S2 3e7a981194 remove functorch 3 years ago
C43H66N12O12S2 623251ce2b allow pascal onwards 3 years ago
Vladimir Repin 9d33baba58 Always show previous mask and fix extras_send dest 3 years ago
hentailord85ez d5c14365fd Add back in output hidden states parameter 3 years ago
hentailord85ez 460bbae587 Pad beginning of textual inversion embedding 3 years ago
hentailord85ez b340439586 Unlimited Token Works 3 years ago
  Unlimited tokens actually work now. Works with textual inversion too. Replaces the previous not-so-much-working implementation.
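The usual way such "unlimited token" support works is to split the prompt's token ids into 75-token chunks, wrap each chunk in BOS/EOS to fill the text encoder's fixed 77-token window, encode each chunk separately, and concatenate the embeddings. A sketch of the general technique using Hugging Face `transformers` (illustrative, not the commit's actual code):

```python
import torch
from transformers import CLIPTextModel, CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_model = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

CHUNK = 75  # the 77-token context window minus BOS and EOS

def encode_long_prompt(prompt: str) -> torch.Tensor:
    ids = tokenizer(prompt, add_special_tokens=False).input_ids
    chunks = [ids[i:i + CHUNK] for i in range(0, len(ids), CHUNK)] or [[]]
    embeddings = []
    for chunk in chunks:
        # Rebuild a full 77-token window: BOS + chunk + EOS padding.
        window = [tokenizer.bos_token_id] + chunk \
               + [tokenizer.eos_token_id] * (CHUNK + 1 - len(chunk))
        hidden = text_model(torch.tensor([window])).last_hidden_state
        embeddings.append(hidden)
    return torch.cat(embeddings, dim=1)  # shape (1, 77 * n_chunks, 768)
```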
RW21 f347ddfd80 Remove max_batch_count from ui.py 3 years ago
alg-wiki 7a20f914ed Custom Width and Height 3 years ago
alg-wiki 6ad3a53e36 Fixed progress bar output for epoch 3 years ago
alg-wiki ea00c1624b Textual Inversion: Added custom training image size and number of repeats per input image in a single epoch 3 years ago
AUTOMATIC 8f1efdc130 --no-half-vae pt2 3 years ago
alg-wiki 04c745ea4f Custom Width and Height 3 years ago
AUTOMATIC 7349088d32 --no-half-vae 3 years ago
JC_Array 2f94331df2 removed the change in the last commit; simplified to adding the visible argument to process_caption_deepbooru, setting it to False if the deepdanbooru argument is not set 3 years ago
alg-wiki 4ee7519fc2 Fixed progress bar output for epoch 3 years ago
JC_Array 8ec069e64d removed duplicate run_preprocess.click by creating run_preprocess_inputs list and appending deepbooru variable to input list if in scope 3 years ago
alg-wiki 3110f895b2 Textual Inversion: Added custom training image size and number of repeats per input image in a single epoch 3 years ago
brkirch 8acc901ba3 Newer versions of PyTorch use TypedStorage instead 3 years ago
  PyTorch 1.13 and later rename _TypedStorage to TypedStorage, so check for TypedStorage and use _TypedStorage if it is not available. Currently this is needed so that nightly builds of PyTorch work correctly.
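The compatibility check the commit describes boils down to a small try/except over the storage class name, roughly (a sketch, not the exact patch):

```python
import torch

try:
    TypedStorage = torch.storage.TypedStorage   # PyTorch 1.13+ (incl. nightlies)
except AttributeError:
    TypedStorage = torch.storage._TypedStorage  # older releases
```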
JC_Array 1f92336be7 refactored the deepbooru module to improve speed when running multiple interrogations in a row. Added the option to generate deepbooru tags for textual inversion preprocessing. 3 years ago
ssysm 6fdad291bd Merge branch 'master' of https://github.com/AUTOMATIC1111/stable-diffusion-webui into upstream-master 3 years ago
ssysm cc92dc1f8d add vae path args 3 years ago
AUTOMATIC a65476718f add DoubleStorage to list of allowed classes for pickle 3 years ago
AUTOMATIC 8d340cfb88 do not add clip skip to parameters if it's 1 or 0 3 years ago
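The intent of the change above is to keep default values out of the generation-parameters text. Assuming a parameters dict like the one the webui serializes into infotext (names here are illustrative), the pattern is simply:

```python
clip_skip = 1  # illustrative; 0 and 1 both mean "use the last CLIP layer"
generation_params = {"Steps": 20}

# Record CLIP skip only when it deviates from the default, so the
# infotext stays clean for the common case.
if clip_skip > 1:
    generation_params["Clip skip"] = clip_skip
```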
Fampai 1824e9ee3a Removed unnecessary tmp variable 3 years ago
Fampai ad3ae44108 Updated code for legibility 3 years ago
Fampai ec2bd9be75 Fix issues with CLIP ignore option name change 3 years ago
Fampai a14f7bf113 Corrected CLIP Layer Ignore description and updated its range to the max possible 3 years ago
Fampai e59c66c008 Optimized code for Ignoring last CLIP layers 3 years ago
AUTOMATIC 6c383d2e82 show model selection setting on top of page 3 years ago
Artem Zagidulin 9ecea0a8d6 fix missing png info when Extras Batch Process 3 years ago
AUTOMATIC 875ddfeecf added guard for torch.load to prevent loading pickles with unknown content 3 years ago
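A typical shape for such a guard, and the context for the DoubleStorage allow-list entry a few commits above, is a `pickle.Unpickler` subclass that rejects any global not on an explicit allow-list (a sketch with illustrative names, not the repository's exact code):

```python
import pickle

ALLOWED_GLOBALS = {
    ("collections", "OrderedDict"),
    ("torch._utils", "_rebuild_tensor_v2"),
    ("torch", "FloatStorage"),
    ("torch", "HalfStorage"),
    ("torch", "DoubleStorage"),  # the class allow-listed above
}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Refuse anything off the allow-list, blocking the os.system /
        # subprocess payloads malicious checkpoints smuggle in as globals.
        if (module, name) not in ALLOWED_GLOBALS:
            raise pickle.UnpicklingError(f"forbidden global {module}.{name}")
        return super().find_class(module, name)
```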
AUTOMATIC 9d1138e294 fix typo in filename for ESRGAN arch 3 years ago
AUTOMATIC e6e8cabe0c change up #2056 to make it work how I want it to, plus make XY plot write correct values to images 3 years ago
William Moorehouse 594cbfd8fb Sanitize infotext output (for now) 3 years ago
William Moorehouse 006791c13d Fix grabbing the model name for infotext 3 years ago
William Moorehouse d6d10a37bf Added extended model details to infotext 3 years ago
AUTOMATIC 542a3d3a4a fix broken hypernetworks in XY plot 3 years ago
AUTOMATIC 77a719648d fix logic error in #1832 3 years ago
AUTOMATIC f4578b343d fix model switching not working properly if there is a different yaml config 3 years ago
AUTOMATIC bd833409ac additional changes for saving pnginfo for #1803 3 years ago
Milly 0609ce06c0 Removed duplicate definition model_path 3 years ago
AUTOMATIC 6f6798ddab prevent a possible code execution error (thanks, RyotaK) 3 years ago
AUTOMATIC 0241d811d2 Revert "Fix for Prompts_from_file showing extra textbox." 3 years ago
  This reverts commit e2930f9821.
AUTOMATIC ab4fe4f44c hide filenames for save button by default 3 years ago
Tony Beeman cbf6dad02d Handle case where on_show returns the wrong number of arguments 3 years ago
Tony Beeman 86cb16886f Pull Request Code Review Fixes 3 years ago
Tony Beeman e2930f9821 Fix for Prompts_from_file showing extra textbox. 3 years ago
Nicolas Noullet 1ffeb42d38 Fix typo 3 years ago
frostydad ef93acdc73 remove line break 3 years ago
frostydad 03e570886f Fix incorrect sampler name in output 3 years ago
Fampai 122d42687b Fix VRAM Issue by only loading in hypernetwork when selected in settings 3 years ago
AUTOMATIC1111 e00b4df7c6 Merge pull request #1752 from Greendayle/dev/deepdanbooru 3 years ago
  Added DeepDanbooru interrogator
aoirusann 14192c5b20 Support `Download` for txt files. 3 years ago
aoirusann 5ab7e88d9b Add `Download` & `Download as zip` 3 years ago
AUTOMATIC 4e569fd888 fixed incorrect message about loading config; thanks anon! 3 years ago
AUTOMATIC c77c89cc83 make main model loading and model merger use the same code 3 years ago
AUTOMATIC 050a6a798c support loading .yaml config with same name as model 3 years ago
  support EMA weights in processing (????)
Aidan Holland 432782163a chore: Fix typos 3 years ago
Edouard Leurent 610a7f4e14 Break after finding the local directory of stable diffusion 3 years ago
  Otherwise, we may override it with one of the next two paths (. or ..) if it is present there, and then the local paths of other modules (taming transformers, codeformers, etc.) won't be found in sd_path/../.
  Fix https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/1085
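The bug and its fix are easiest to see in code: the search loops over candidate locations and must stop at the first hit, otherwise a later candidate such as `.` can overwrite a correct earlier match. A sketch of the pattern (paths are illustrative):

```python
import os

script_path = os.path.dirname(os.path.abspath(__file__))
possible_sd_paths = [
    os.path.join(script_path, "repositories/stable-diffusion"),
    ".",
    os.path.dirname(script_path),
]

sd_path = None
for candidate in possible_sd_paths:
    # A file unique to the stable-diffusion repo marks a valid checkout.
    if os.path.exists(os.path.join(candidate, "ldm/models/diffusion/ddpm.py")):
        sd_path = candidate
        break  # the fix: without this, '.' or '..' could override the match
```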
AUTOMATIC 3b2141c5fb add 'Ignore last layers of CLIP model' option as a parameter to the infotext 3 years ago
AUTOMATIC e6e42f98df make --force-enable-xformers work without needing --xformers 3 years ago
Fampai 1371d7608b Added ability to ignore last n layers in FrozenCLIPEmbedder 3 years ago
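What "ignoring the last n layers" means in practice: take the text encoder's hidden states from an earlier transformer layer instead of the final one, then re-apply the final layer norm. A sketch with Hugging Face `transformers` (an approximation of the idea, not the commit's code):

```python
import torch
from transformers import CLIPTextModel, CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_model = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

def encode(prompt: str, stop_at_layer: int = 1) -> torch.Tensor:
    # stop_at_layer=1 uses the last layer (no skipping); 2 skips one, etc.
    tokens = tokenizer(prompt, truncation=True, padding="max_length",
                       return_tensors="pt")
    out = text_model(**tokens, output_hidden_states=True)
    hidden = out.hidden_states[-stop_at_layer]
    # Re-apply the final layer norm the skipped layers would have fed into.
    return text_model.text_model.final_layer_norm(hidden)
```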
DepFA b458fa48fe Update ui.py 3 years ago
DepFA 15c4278f1a TI preprocess wording 3 years ago
  I had to check the code to work out what splitting was 🤷🏿
Greendayle 0ec80f0125 Merge branch 'master' into dev/deepdanbooru 3 years ago
AUTOMATIC 3061cdb7b6 add --force-enable-xformers option and also add messages to console regarding cross attention optimizations 3 years ago
AUTOMATIC f9c5da1592 add fallback for xformers_attnblock_forward 3 years ago
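The fallback the commit adds follows a common pattern: attempt the memory-efficient xformers kernel, and drop back to plain scaled-dot-product attention if it fails for the given inputs. A self-contained sketch (not the repo's exact hook):

```python
import torch

def attention_with_fallback(q, k, v):
    # q, k, v: (batch, tokens, channels)
    try:
        import xformers.ops
        return xformers.ops.memory_efficient_attention(q, k, v)
    except Exception:
        # Plain attention path, used when xformers is unavailable or the
        # kernel rejects these shapes/dtypes.
        scale = q.shape[-1] ** -0.5
        weights = torch.softmax(q @ k.transpose(-2, -1) * scale, dim=-1)
        return weights @ v
```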
Greendayle 01f8cb4447 made deepdanbooru optional, added to readme, automatic download of deepbooru model 3 years ago
Artem Zagidulin a5550f0213 alternate prompt 3 years ago
C43H66N12O12S2 cc0258aea7 check for ampere without destroying the optimizations. again. 3 years ago
C43H66N12O12S2 017b6b8744 check for ampere 3 years ago
Greendayle 5329d0aba0 Merge branch 'master' into dev/deepdanbooru 3 years ago
AUTOMATIC cfc33f99d4 why did you do this 3 years ago
Greendayle 2e8ba0fa47 fix conflicts 3 years ago
Milly 4f33289d0f Fixed typo 3 years ago
AUTOMATIC 27032c47df restore old opt_split_attention/disable_opt_split_attention logic 3 years ago
AUTOMATIC dc1117233e simplify xformers options: --xformers to enable and that's it 3 years ago
AUTOMATIC 7ff1170a2e emergency fix for xformers (continue + shared) 3 years ago
AUTOMATIC1111 48feae37ff Merge pull request #1851 from C43H66N12O12S2/flash 3 years ago
  xformers attention
C43H66N12O12S2 970de9ee68 Update sd_hijack.py 3 years ago
C43H66N12O12S2 69d0053583 update sd_hijack_opt to respect new env variables 3 years ago
C43H66N12O12S2 ddfa9a9786 add xformers_available shared variable 3 years ago
C43H66N12O12S2 26b459a379 default to split attention if cuda is available and xformers is not 3 years ago
MrCheeze 5f85a74b00 fix bug where when using prompt composition, hijack_comments generated before the final AND will be dropped 3 years ago
ddPn08 772db721a5 fix glob path in hypernetwork.py 3 years ago
AUTOMATIC 7001bffe02 fix AND broken for long prompts 3 years ago
AUTOMATIC 77f4237d1c fix bugs related to variable prompt lengths 3 years ago
AUTOMATIC 4999eb2ef9 do not let user choose his own prompt token count limit 3 years ago
Trung Ngo 00117a07ef check specifically for skipped 3 years ago
Trung Ngo 786d9f63aa Add button to skip the current iteration 3 years ago
AUTOMATIC 45cc0ce3c4 Merge remote-tracking branch 'origin/master' 3 years ago
AUTOMATIC 706d5944a0 let user choose his own prompt token count limit 3 years ago