【Stable Diffusion】Launch commands: the difference between webui-user.bat and webui.bat, and which to use | Recommended arguments and a full list of launch options


Stable Diffusion has two launch scripts: webui-user.bat and webui.bat.

Different tutorial sites and videos use different launch commands, so you may well be wondering which one you should use.

This article briefly explains which of webui-user.bat and webui.bat to use, and how the two differ.


Which should you use: webui-user.bat or webui.bat?

The short answer: for normal use, launch with webui-user.bat. It is the script provided for everyday use.

With webui-user.bat you can launch with arguments (options) such as --medvram, which reduces VRAM (GPU memory) usage, or --xformers, which both reduces VRAM consumption and speeds up generation.

A great many options are available; a full list appears toward the end of this article.



The difference between webui-user.bat and webui.bat

Both webui-user.bat and webui.bat are batch files that launch Stable Diffusion from the command line.

webui.bat is the developer-side script: it runs the base code that actually starts the program. Ordinary (non-developer) users normally have no reason to edit webui.bat.

webui-user.bat is a wrapper: it sets user options and then calls webui.bat, which launches the Stable Diffusion core. webui-user.bat is the file users are expected to edit, for example to add launch arguments.

Looking at the contents of the two batch files makes it clear which one is meant for developers and which for users.


webui-user.bat (the user-facing file)

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=

call webui.bat


webui.bat (the developer-facing file)

@echo off

if exist webui.settings.bat (
    call webui.settings.bat
)

if not defined PYTHON (set PYTHON=python)
if defined GIT (set "GIT_PYTHON_GIT_EXECUTABLE=%GIT%")
if not defined VENV_DIR (set "VENV_DIR=%~dp0%venv")

set SD_WEBUI_RESTART=tmp/restart
set ERROR_REPORTING=FALSE

mkdir tmp 2>NUL

%PYTHON% -c "" >tmp/stdout.txt 2>tmp/stderr.txt
if %ERRORLEVEL% == 0 goto :check_pip
echo Couldn't launch python
goto :show_stdout_stderr

:check_pip
%PYTHON% -mpip --help >tmp/stdout.txt 2>tmp/stderr.txt
if %ERRORLEVEL% == 0 goto :start_venv
if "%PIP_INSTALLER_LOCATION%" == "" goto :show_stdout_stderr
%PYTHON% "%PIP_INSTALLER_LOCATION%" >tmp/stdout.txt 2>tmp/stderr.txt
if %ERRORLEVEL% == 0 goto :start_venv
echo Couldn't install pip
goto :show_stdout_stderr

:start_venv
if ["%VENV_DIR%"] == ["-"] goto :skip_venv
if ["%SKIP_VENV%"] == ["1"] goto :skip_venv

dir "%VENV_DIR%\Scripts\Python.exe" >tmp/stdout.txt 2>tmp/stderr.txt
if %ERRORLEVEL% == 0 goto :activate_venv

for /f "delims=" %%i in ('CALL %PYTHON% -c "import sys; print(sys.executable)"') do set PYTHON_FULLNAME="%%i"
echo Creating venv in directory %VENV_DIR% using python %PYTHON_FULLNAME%
%PYTHON_FULLNAME% -m venv "%VENV_DIR%" >tmp/stdout.txt 2>tmp/stderr.txt
if %ERRORLEVEL% == 0 goto :activate_venv
echo Unable to create venv in directory "%VENV_DIR%"
goto :show_stdout_stderr

:activate_venv
set PYTHON="%VENV_DIR%\Scripts\Python.exe"
echo venv %PYTHON%

:skip_venv
if [%ACCELERATE%] == ["True"] goto :accelerate
goto :launch

:accelerate
echo Checking for accelerate
set ACCELERATE="%VENV_DIR%\Scripts\accelerate.exe"
if EXIST %ACCELERATE% goto :accelerate_launch

:launch
%PYTHON% launch.py %*
if EXIST tmp/restart goto :skip_venv
pause
exit /b

:accelerate_launch
echo Accelerating
%ACCELERATE% launch --num_cpu_threads_per_process=6 launch.py
if EXIST tmp/restart goto :skip_venv
pause
exit /b

:show_stdout_stderr

echo.
echo exit code: %errorlevel%

for /f %%i in ("tmp\stdout.txt") do set size=%%~zi
if %size% equ 0 goto :show_stderr
echo.
echo stdout:
type tmp\stdout.txt

:show_stderr
for /f %%i in ("tmp\stderr.txt") do set size=%%~zi
if %size% equ 0 goto :endofscript
echo.
echo stderr:
type tmp\stderr.txt

:endofscript

echo.
echo Launch unsuccessful. Exiting.
pause



How to use webui-user.bat

To apply options at launch, open the batch file webui-user.bat in the stable-diffusion-webui folder with a text editor such as Notepad or VS Code.

Note 1: open webui-user.bat, not webui-user.sh (the shell script for Linux/macOS) and not webui.bat.
Note 2: double-clicking the file opens a terminal and launches Stable Diffusion, rather than opening the file for editing.



By default, the file looks like this:

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=

call webui.bat


Line 6, set COMMANDLINE_ARGS=, is where you specify option arguments.

Write the options after the equals sign:

set COMMANDLINE_ARGS=--xformers 


To specify multiple options, separate them with single spaces:

set COMMANDLINE_ARGS=--allow-code --xformers --skip-torch-cuda-test --no-half-vae --api --ckpt-dir A:\\stable-diffusion-checkpoints 


Written on one line, the arguments can become long and hard to read. In that case you can break the line using a caret (^):

set COMMANDLINE_ARGS= ^
  --allow-code ^
  --xformers ^
  --skip-torch-cuda-test ^
  --no-half-vae ^
  --api ^
  --ckpt-dir A:\\stable-diffusion-checkpoints 


^ is the line-continuation character in batch files (.bat extension). A caret joins the end of one line directly to the start of the next, so put a space before each option to keep the arguments separated.

Do not put a caret after the last option.
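To see why the trailing caret matters: in the default file, the set line is immediately followed by call webui.bat, and a stray caret joins the two lines into one command. The snippet below sketches this (the option chosen is just illustrative; the behavior is the same in any .bat file):

```batch
rem NG: the trailing caret joins the next line into the set statement,
rem so "call webui.bat" becomes part of COMMANDLINE_ARGS and never runs
set COMMANDLINE_ARGS=--xformers ^
call webui.bat

rem OK: no caret after the last option, so webui.bat is called normally
set COMMANDLINE_ARGS=--xformers
call webui.bat
```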


Reference: line continuation in batch files

The two snippets below are equivalent:

echo

 ↓↑

e^
c^
h^
o



Main options (arguments) for webui-user.bat

Commonly used and recommended options for webui-user.bat include the following.


--xformers | faster image generation, lower memory consumption

--xformers markedly increases image generation speed while also reducing VRAM usage.

The xFormers library now ships with Stable Diffusion WebUI as an official wheel, so there is no need to build it yourself; you still pass --xformers to enable it.

Keep the following two caveats in mind:

Caveats when using xformers
  1. NVIDIA GPUs only
  2. Some extensions are incompatible with it


Note: since January 23, 2023, xformers has shipped with Stable Diffusion WebUI by default, so manual building is no longer required.

As of January 23, 2023, neither Windows nor Linux users are required to manually build the Xformers library. This change was implemented when WebUI transitioned from a user-built wheel to an official wheel. You can view the package upgrades and other details of this update in this PR.

(Reference) AUTOMATIC1111/stable-diffusion-webui Xformers


--theme dark | enable dark mode

Stable Diffusion WebUI starts in light mode by default. Specifying --theme dark launches it in dark mode.

To force light mode explicitly, specify --theme light.

You can also toggle between dark and light mode from the browser UI.
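Put into webui-user.bat, the dark-mode setting might look like the following (combining it with --xformers here is purely an illustrative choice):

```batch
rem Start the WebUI in dark mode; use --theme light to force light mode
set COMMANDLINE_ARGS=--theme dark --xformers
```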


--opt-sdp-attention | faster image generation (faster than xformers)

--opt-sdp-attention, like xformers, increases image generation speed.

"Why not just use xformers?", you might ask. --opt-sdp-attention has two advantages:

Comparison of --opt-sdp-attention and --xformers
  1. Faster than xformers on some systems
  2. Works on GPUs other than NVIDIA

The trade-off is that it requires more VRAM. It is recommended when your GPU has VRAM to spare.
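A possible webui-user.bat line for a card with plenty of VRAM, using --opt-sdp-attention in place of --xformers (the two are alternative attention optimizations; this is a sketch, not a universal recommendation):

```batch
rem Requires PyTorch 2.x; trades extra VRAM for faster generation
set COMMANDLINE_ARGS=--opt-sdp-attention
```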



List of options (arguments) available in webui-user.bat

The main options available in webui-user.bat are listed below.


Argument | Description
--opt-sdp-attention | May result in faster speeds than using xFormers on some systems but requires more VRAM. (non-deterministic)
--opt-sdp-no-mem-attention | May result in faster speeds than using xFormers on some systems but requires more VRAM. (deterministic, slightly slower than --opt-sdp-attention and uses more VRAM)
--xformers | Use xFormers library. Great improvement to memory consumption and speed. Nvidia GPUs only. (deterministic as of 0.0.19 [webui uses 0.0.20 as of 1.4.0])
--force-enable-xformers | Enables xFormers regardless of whether the program thinks you can run it or not. Do not report bugs you get running this.
--opt-split-attention | Cross attention layer optimization significantly reducing memory use for almost no cost (some report improved performance with it). Black magic. On by default for torch.cuda, which includes both NVidia and AMD cards.
--disable-opt-split-attention | Disables the optimization above.
--opt-sub-quad-attention | Sub-quadratic attention, a memory efficient Cross Attention layer optimization that can significantly reduce required memory, sometimes at a slight performance cost. Recommended if getting poor performance or failed generations with a hardware/software configuration that xFormers doesn't work for. On macOS, this will also allow for generation of larger images.
--opt-split-attention-v1 | Uses an older version of the optimization above that is not as memory hungry (it will use less VRAM, but will be more limiting in the maximum size of pictures you can make).
--medvram | Makes the Stable Diffusion model consume less VRAM by splitting it into three parts: cond (for transforming text into numerical representation), first_stage (for converting a picture into latent space and back), and unet (for actual denoising of latent space), and making it so that only one is in VRAM at all times, sending the others to CPU RAM. Lowers performance, but only by a bit, unless live previews are enabled.
--lowvram | An even more thorough optimization of the above, splitting unet into many modules, with only one module kept in VRAM. Devastating for performance.
*do-not-batch-cond-uncond | Only before 1.6.0: prevents batching of positive and negative prompts during sampling, which essentially lets you run at 0.5 batch size, saving a lot of memory. Decreases performance. Not a command line option, but an optimization implicitly enabled by using --medvram or --lowvram. In 1.6.0, this optimization is not enabled by any command line flags and is instead enabled by default. It can be disabled in settings, via the Batch cond/uncond option in the Optimizations category.
--always-batch-cond-uncond | Only before 1.6.0: disables the optimization above. Only makes sense together with --medvram or --lowvram. In 1.6.0, this command line flag does nothing.
--opt-channelslast | Changes torch memory type for stable diffusion to channels last. Effects not closely studied.
--upcast-sampling | For Nvidia and AMD cards normally forced to run with --no-half; should improve generation speed.
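As a concrete example of the memory flags above, here is one plausible configuration for a low-VRAM NVIDIA GPU (the right combination depends on your card; this is a sketch, not a prescription):

```batch
rem --medvram keeps only one model part in VRAM at a time;
rem --xformers reduces attention memory use and speeds up generation
set COMMANDLINE_ARGS=--medvram --xformers
```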



Many more options are available beyond those above.

Argument | Value | Default | Description
CONFIGURATION
-h, --help | None | False | Show this help message and exit.
--exit | | | Terminate after installation
--data-dir | DATA_DIR | ./ | base path where all user data is stored
--config | CONFIG | configs/stable-diffusion/v1-inference.yaml | Path to config which constructs model.
--ckpt | CKPT | model.ckpt | Path to checkpoint of Stable Diffusion model; if specified, this checkpoint will be added to the list of checkpoints and loaded.
--ckpt-dir | CKPT_DIR | None | Path to directory with Stable Diffusion checkpoints.
--no-download-sd-model | None | False | Don't download SD1.5 model even if no model is found.
--do-not-download-clip | None | False | do not download CLIP model even if it's not included in the checkpoint
--vae-dir | VAE_PATH | None | Path to Variational Autoencoders model
--vae-path | VAE_PATH | None | Checkpoint to use as VAE; setting this argument disables all settings related to VAE.
--gfpgan-dir | GFPGAN_DIR | GFPGAN/ | GFPGAN directory.
--gfpgan-model | GFPGAN_MODEL | | GFPGAN model file name.
--codeformer-models-path | CODEFORMER_MODELS_PATH | models/Codeformer/ | Path to directory with codeformer model file(s).
--gfpgan-models-path | GFPGAN_MODELS_PATH | models/GFPGAN | Path to directory with GFPGAN model file(s).
--esrgan-models-path | ESRGAN_MODELS_PATH | models/ESRGAN | Path to directory with ESRGAN model file(s).
--bsrgan-models-path | BSRGAN_MODELS_PATH | models/BSRGAN | Path to directory with BSRGAN model file(s).
--realesrgan-models-path | REALESRGAN_MODELS_PATH | models/RealESRGAN | Path to directory with RealESRGAN model file(s).
--scunet-models-path | SCUNET_MODELS_PATH | models/ScuNET | Path to directory with ScuNET model file(s).
--swinir-models-path | SWINIR_MODELS_PATH | models/SwinIR | Path to directory with SwinIR and SwinIR v2 model file(s).
--ldsr-models-path | LDSR_MODELS_PATH | models/LDSR | Path to directory with LDSR model file(s).
--lora-dir | LORA_DIR | models/Lora | Path to directory with Lora networks.
--clip-models-path | CLIP_MODELS_PATH | None | Path to directory with CLIP model file(s).
--embeddings-dir | EMBEDDINGS_DIR | embeddings/ | Embeddings directory for textual inversion (default: embeddings).
--textual-inversion-templates-dir | TEXTUAL_INVERSION_TEMPLATES_DIR | textual_inversion_templates | Directory with textual inversion templates.
--hypernetwork-dir | HYPERNETWORK_DIR | models/hypernetworks/ | hypernetwork directory.
--localizations-dir | LOCALIZATIONS_DIR | localizations/ | Localizations directory.
--styles-file | STYLES_FILE | styles.csv | Filename to use for styles.
--ui-config-file | UI_CONFIG_FILE | ui-config.json | Filename to use for UI configuration.
--no-progressbar-hiding | None | False | Do not hide progress bar in gradio UI (we hide it because it slows down ML if you have hardware acceleration in browser).
--max-batch-count | MAX_BATCH_COUNT | 16 | Maximum batch count value for the UI.
--ui-settings-file | UI_SETTINGS_FILE | config.json | Filename to use for UI settings.
--allow-code | None | False | Allow custom script execution from web UI.
--share | None | False | Use share=True for gradio and make the UI accessible through their site.
--listen | None | False | Launch gradio with 0.0.0.0 as server name, allowing to respond to network requests.
--port | PORT | 7860 | Launch gradio with given server port, you need root/admin rights for ports < 1024; defaults to 7860 if available.
--hide-ui-dir-config | None | False | Hide directory configuration from web UI.
--freeze-settings | None | False | disable editing settings
--enable-insecure-extension-access | None | False | Enable extensions tab regardless of other options.
--gradio-debug | None | False | Launch gradio with --debug option.
--gradio-auth | GRADIO_AUTH | None | Set gradio authentication like username:password; or comma-delimit multiple like u1:p1,u2:p2,u3:p3.
--gradio-auth-path | GRADIO_AUTH_PATH | None | Set gradio authentication file path ex. /path/to/auth/file, same auth format as --gradio-auth.
--disable-console-progressbars | None | False | Do not output progress bars to console.
--enable-console-prompts | None | False | Print prompts to console when generating with txt2img and img2img.
--api | None | False | Launch web UI with API.
--api-auth | API_AUTH | None | Set authentication for API like username:password; or comma-delimit multiple like u1:p1,u2:p2,u3:p3.
--api-log | None | False | Enable logging of all API requests.
--nowebui | None | False | Only launch the API, without the UI.
--ui-debug-mode | None | False | Don't load model to quickly launch UI.
--device-id | DEVICE_ID | None | Select the default CUDA device to use (export CUDA_VISIBLE_DEVICES=0,1 etc might be needed before).
--administrator | None | False | Administrator privileges.
--cors-allow-origins | CORS_ALLOW_ORIGINS | None | Allowed CORS origin(s) in the form of a comma-separated list (no spaces).
--cors-allow-origins-regex | CORS_ALLOW_ORIGINS_REGEX | None | Allowed CORS origin(s) in the form of a single regular expression.
--tls-keyfile | TLS_KEYFILE | None | Partially enables TLS, requires --tls-certfile to fully function.
--tls-certfile | TLS_CERTFILE | None | Partially enables TLS, requires --tls-keyfile to fully function.
--disable-tls-verify | None | False | When passed, enables the use of self-signed certificates.
--server-name | SERVER_NAME | None | Sets hostname of server.
--no-gradio-queue | None | False | Disables gradio queue; causes the webpage to use http requests instead of websockets; was the default in earlier versions.
--gradio-allowed-path | None | None | Add path to Gradio's allowed_paths; make it possible to serve files from it.
--no-hashing | None | False | Disable SHA-256 hashing of checkpoints to help loading performance.
--skip-version-check | None | False | Do not check versions of torch and xformers.
--skip-python-version-check | None | False | Do not check versions of Python.
--skip-torch-cuda-test | None | False | Do not check if CUDA is able to work properly.
--skip-install | None | False | Skip installation of packages.
--loglevel | None | None | log level; one of: CRITICAL, ERROR, WARNING, INFO, DEBUG
--log-startup | None | False | launch.py argument: print a detailed log of what's happening at startup
--api-server-stop | None | False | enable server stop/restart/kill via api
--timeout-keep-alive | int | 30 | set timeout_keep_alive for uvicorn
PERFORMANCE
--xformers | None | False | Enable xformers for cross attention layers.
--force-enable-xformers | None | False | Enable xformers for cross attention layers regardless of whether the checking code thinks you can run it; do not make bug reports if this fails to work.
--xformers-flash-attention | None | False | Enable xformers with Flash Attention to improve reproducibility (supported for SD2.x or variant only).
--opt-sdp-attention | None | False | Enable scaled dot product cross-attention layer optimization; requires PyTorch 2.*
--opt-sdp-no-mem-attention | None | False | Enable scaled dot product cross-attention layer optimization without memory efficient attention, makes image generation deterministic; requires PyTorch 2.*
--opt-split-attention | None | False | Force-enables Doggettx's cross-attention layer optimization. By default, it's on for CUDA-enabled systems.
--opt-split-attention-invokeai | None | False | Force-enables InvokeAI's cross-attention layer optimization. By default, it's on when CUDA is unavailable.
--opt-split-attention-v1 | None | False | Enable older version of split attention optimization that does not consume all VRAM available.
--opt-sub-quad-attention | None | False | Enable memory efficient sub-quadratic cross-attention layer optimization.
--sub-quad-q-chunk-size | SUB_QUAD_Q_CHUNK_SIZE | 1024 | Query chunk size for the sub-quadratic cross-attention layer optimization to use.
--sub-quad-kv-chunk-size | SUB_QUAD_KV_CHUNK_SIZE | None | KV chunk size for the sub-quadratic cross-attention layer optimization to use.
--sub-quad-chunk-threshold | SUB_QUAD_CHUNK_THRESHOLD | None | The percentage of VRAM threshold for the sub-quadratic cross-attention layer optimization to use chunking.
--opt-channelslast | None | False | Enable alternative layout for 4d tensors, may result in faster inference only on Nvidia cards with Tensor cores (16xx and higher).
--disable-opt-split-attention | None | False | Force-disables cross-attention layer optimization.
--disable-nan-check | None | False | Do not check if produced images/latent spaces have nans; useful for running without a checkpoint in CI.
--use-cpu | {all, sd, interrogate, gfpgan, bsrgan, esrgan, scunet, codeformer} | None | Use CPU as torch device for specified modules.
--no-half | None | False | Do not switch the model to 16-bit floats.
--precision | {full, autocast} | autocast | Evaluate at this precision.
--no-half-vae | None | False | Do not switch the VAE model to 16-bit floats.
--upcast-sampling | None | False | Upcast sampling. No effect with --no-half. Usually produces similar results to --no-half with better performance while using less memory.
--medvram | None | False | Enable Stable Diffusion model optimizations for sacrificing some performance for low VRAM usage.
--medvram-sdxl | None | False | enable --medvram optimization just for SDXL models
--lowvram | None | False | Enable Stable Diffusion model optimizations for sacrificing a lot of speed for very low VRAM usage.
--lowram | None | False | Load Stable Diffusion checkpoint weights to VRAM instead of RAM.
--disable-model-loading-ram-optimization | None | False | disable an optimization that reduces RAM use when loading a model
FEATURES
--autolaunch | None | False | Open the web UI URL in the system's default browser upon launch.
--theme | None | Unset | Open the web UI with the specified theme (light or dark). If not specified, uses the default browser theme.
--use-textbox-seed | None | False | Use textbox for seeds in UI (no up/down, but possible to input long seeds).
--disable-safe-unpickle | None | False | Disable checking PyTorch models for malicious code.
--ngrok | NGROK | None | ngrok authtoken, alternative to gradio --share.
--ngrok-region | NGROK_REGION | us | The region in which ngrok should start.
--ngrok-options | NGROK_OPTIONS | None | The options to pass to ngrok in JSON format, e.g.: {"authtoken_from_env":true, "basic_auth":"user:password", "oauth_provider":"google", "oauth_allow_emails":"user@asdf.com"}
--update-check | None | None | On startup, notifies whether or not your web UI version (commit) is up-to-date with the current master branch.
--update-all-extensions | None | None | On startup, it pulls the latest updates for all extensions you have installed.
--reinstall-xformers | None | False | Force-reinstall xformers. Useful for upgrading, but remove it after upgrading or you'll reinstall xformers perpetually.
--reinstall-torch | None | False | Force-reinstall torch. Useful for upgrading, but remove it after upgrading or you'll reinstall torch perpetually.
--tests | TESTS | False | Run test to validate web UI functionality, see wiki topic for more details.
--no-tests | None | False | Do not run tests even if --tests option is specified.
--dump-sysinfo | None | False | launch.py argument: dump limited sysinfo file (without information about extensions, options) to disk and quit
--disable-all-extensions | None | False | disable all extensions from running
--disable-extra-extensions | None | False | disable all non-built-in extensions from running
DEFUNCT OPTIONS
--show-negative-prompt | None | False | No longer has an effect.
--deepdanbooru | None | False | No longer has an effect.
--unload-gfpgan | None | False | No longer has an effect.
--gradio-img2img-tool | GRADIO_IMG2IMG_TOOL | None | No longer has an effect.
--gradio-inpaint-tool | GRADIO_INPAINT_TOOL | None | No longer has an effect.
--gradio-queue | None | False | No longer has an effect.
--add-stop-route | None | False | No longer has an effect.
--always-batch-cond-uncond | None | False | No longer has an effect; moved into the UI under Settings > Optimizations.
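As one example of combining the networking options in the table, the line below would expose the UI on the local network and enable the REST API (this is a sketch for a trusted home network, not a security recommendation; adjust the port as needed):

```batch
rem --listen binds to 0.0.0.0; --port sets the gradio port; --api enables the API
set COMMANDLINE_ARGS=--listen --port 7860 --api
```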


