15 Commits (6f18c9b13f06112d6afe9be062fe5308767ea38a)

Author SHA1 Message Date
Max Audron 5eee2ac398 add data-dir flag and set all user data directories based on it 3 years ago
brkirch f4a488f585 Set device for facelib/facexlib and gfpgan
* FaceXLib/FaceLib doesn't pass the device argument through to RetinaFace; instead it chooses a device itself and stores it in a module-level global. To use a device other than its internally chosen default, that default value has to be replaced manually.
* The GFPGAN constructor needs the device argument to work with MPS, or with a CUDA device ID that differs from the default.
3 years ago
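The workaround described in the f4a488f585 entry amounts to replacing a module-level default before the library constructs its detector, since the device argument is otherwise ignored. A minimal sketch of that pattern, using a hypothetical stand-in module because facexlib's internals are not reproduced in this log:

```python
import types

# Hypothetical stand-in for a library module (like facexlib's RetinaFace
# wrapper) that picks a device on its own and stores it in a module-level
# global, ignoring any device argument supplied by the caller.
retinaface = types.SimpleNamespace(device="cuda:0")

def load_detector(module):
    # The library builds its detector from the module-level global,
    # not from a caller-supplied device argument.
    return {"device": module.device}

# Replace the module-level default *before* the detector is constructed;
# this mirrors the manual replacement the commit describes.
retinaface.device = "mps"
detector = load_detector(retinaface)
```

GFPGAN, by contrast, accepts the device directly through its constructor, which is why only the facelib/facexlib side needs this kind of patch.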
brkirch e9e2a7ec9a Merge branch 'master' into cpu-cmdline-opt 3 years ago
AUTOMATIC 6c6ae28bf5 send all three of GFPGAN's and codeformer's models to CPU memory instead of just one for #1283 3 years ago
brkirch b88e4ea7d6 Merge branch 'master' into master 3 years ago
AUTOMATIC 6491b09c24 use existing function for gfpgan 3 years ago
brkirch bdaa36c844 When device is MPS, use CPU for GFPGAN instead
GFPGAN will not work if the device is MPS, so default to CPU instead.
3 years ago
AUTOMATIC d1f098540a remove unwanted formatting/functionality from the PR 3 years ago
d8ahazard 0dce0df1ee Holy $hit.
Yep.

* Fix gfpgan_model_arch requirement(s).
* Add Upscaler base class, moved out of images.
* Add a lot of methods to Upscaler.
* Re-work all the child upscalers into proper classes.
* Add BSRGAN scaler.
* Add ldsr_model_arch class, removing the dependency on another repo that just uses regular latent-diffusion stuff.
* Add one universal method that will always find and load new upscaler models without having to add new "setup_model" calls. Still need to add command line params, but that could probably be automated.
* Add a "self.scale" property to all Upscalers so the scalers themselves can do "things" in response to the requested upscaling size.
* Ensure LDSR doesn't get stuck in a long "upscale/downscale/upscale" loop while trying to reach the target upscale size.
* Add typehints for IDE sanity.
* PEP-8 improvements.
* Moar.
3 years ago
d8ahazard 740070ea9c Re-implement universal model loading 3 years ago
AUTOMATIC d4205e66fa gfpgan: just download the damn model 3 years ago
AUTOMATIC 843b2b64fc Instance of CUDA out of memory on a low-res batch, even with --opt-split-attention-v1 (found cause) #255 3 years ago
AUTOMATIC 6a9b33c848 codeformer support 3 years ago
AUTOMATIC 595c827bd3 option to unload GFPGAN after using 3 years ago
AUTOMATIC 345028099d split codebase into multiple files; to anyone this affects negatively: sorry 3 years ago