You can optionally use GFPGAN to improve faces; if you want that, you'll need to download the model from [here](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth).
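A hedged sketch of grabbing it from the command line, assuming a recent Windows where `curl` is available and that the file goes into the stable diffusion directory under the default name expected by `--gfpgan-model` (where exactly the script looks for it is not spelled out in this section):

```commandline
:: download the GFPGAN model into the current directory (b:\src\sd in the instructions below)
curl -L -o GFPGANv1.3.pth https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth
```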
This repository's `webgui.py` is a replacement for `kdiff.py` from the guide. In particular, the files pointed to by the `--config` and `--ckpt` defaults listed at the end of this section must exist.

Instructions:
```commandline
:: create a directory somewhere for stable diffusion and open cmd in it; below the directory is assumed to be b:\src\sd

:: make sure you are in the right directory; the command must output b:\src\sd
echo %cd%

:: install torch with CUDA support. See https://pytorch.org/get-started/locally/ for more instructions if this fails.
pip install torch --extra-index-url https://download.pytorch.org/whl/cu113
```
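Before going further, it may be worth checking that torch can actually see the GPU; this quick sanity test is not part of the original instructions:

```commandline
:: should print "True"; if it prints "False", torch was installed without CUDA support
python -c "import torch; print(torch.cuda.is_available())"
```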
Command-line options accepted by `webgui.py`:

```python
parser.add_argument("--config",type=str,default="configs/stable-diffusion/v1-inference.yaml",help="path to config which constructs model",)
parser.add_argument("--ckpt",type=str,default="models/ldm/stable-diffusion-v1/model.ckpt",help="path to checkpoint of model",)
parser.add_argument("--gfpgan-model",type=str,help="GFPGAN model file name",default='GFPGANv1.3.pth')
parser.add_argument("--no-half",action='store_true',help="do not switch the model to 16-bit floats")
parser.add_argument("--no-progressbar-hiding",action='store_true',help="do not hide progressbar in gradio UI (we hide it because it slows down ML if you have hardware acceleration in browser)")
parser.add_argument("--max-batch-count",type=int,default=16,help="maximum batch count value for the UI")
```
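Put together, a run might look something like the sketch below; the checkpoint and model names are just the defaults shown above, so adjust them to wherever your files actually live:

```commandline
:: run the web UI from b:\src\sd, keeping the model in 32-bit floats and using the GFPGAN model downloaded earlier
python webgui.py --no-half --ckpt models/ldm/stable-diffusion-v1/model.ckpt --gfpgan-model GFPGANv1.3.pth
```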