Running ComfyUI on RunPod's powerful 50-series NVIDIA GPUs like the 5090 can dramatically enhance your workflow. This guide shows you how to set up ComfyUI with persistent storage, allowing you to shut down your server when not in use while preserving all your work.
Create Your RunPod Account and Network Storage
Sign up for a RunPod account and fund it with at least $5
Navigate to the storage tab and click "New Network Volume"
Select a region that offers 5090 GPUs (Europe currently has availability)
Name your volume for reference purposes
Choose a conservative initial storage size (50GB recommended); you can increase it later but cannot decrease it
Click "Create Network Volume"
Deploy Your Pod with the Right Configuration
Go to the Pods page and click "Deploy"
Select your newly created network volume at the top
Choose a 5090 GPU from the Secure Cloud option (preferred over Community Cloud)
Select the PyTorch 2.8 template
Click "Edit Template"
Add port 8188 (ComfyUI's default port)
Click "Set Overrides"
Click "Deploy on Demand"
Setting Up the Environment
Once your pod starts, expand it and connect to JupyterLab
In JupyterLab, open a terminal
Use wget to download the setup script:
wget --content-disposition [script URL]
Verify you're in the workspace directory
Run the script:
bash comfy_ui_setup.sh
Wait approximately 10 minutes for the script to finish the installation
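Taken together, the download-and-install steps above look roughly like this in a JupyterLab terminal. This is a sketch: `SCRIPT_URL` is a placeholder for the script URL given in the guide, and `/workspace` is RunPod's usual mount point for the network volume.

```shell
# Sketch of the setup steps above; SCRIPT_URL stands in for the
# script URL from the guide (placeholder, not a real address).
cat > fetch_and_setup.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
cd /workspace                              # the network volume mount point
wget --content-disposition "$SCRIPT_URL"   # keeps the server-side filename
bash comfy_ui_setup.sh                     # installs everything (~10 min)
EOF
bash -n fetch_and_setup.sh && echo "setup sketch parses"
```

Writing the steps into a script on the network volume means you can rerun them easily if you ever rebuild the pod.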
Activating Your Environment and Running ComfyUI
Once the script completes, load conda so the base environment is available:
source ~/miniconda3/etc/profile.d/conda.sh
Open a new terminal (you should see "base" prefixed)
Activate the ComfyUI environment:
conda activate comfy_ui
Change to the ComfyUI directory:
cd comfy_ui
Start ComfyUI:
python main.py --listen
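For later sessions, the activation and launch steps above can be collected into a small helper script. This is a hypothetical convenience script, assuming the paths the setup script creates (the `~/miniconda3` install and the `comfy_ui` directory and conda environment named in the guide):

```shell
# Hypothetical restart helper; paths and names are taken from the steps above.
cat > start_comfyui.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
source ~/miniconda3/etc/profile.d/conda.sh   # make conda available in this shell
conda activate comfy_ui                      # env created by the setup script
cd ~/comfy_ui                                # ComfyUI directory (assumed path)
python main.py --listen                      # --listen binds 0.0.0.0 so RunPod's proxy can reach port 8188
EOF
chmod +x start_comfyui.sh
bash -n start_comfyui.sh && echo "launch sketch parses"
```

After a pod restart you would then only need to run `bash start_comfyui.sh` instead of repeating each step by hand.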
Return to your RunPod tab, expand your pod, and connect to port 8188
ComfyUI should now be running in your browser
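If the page doesn't load right away, you can check from a JupyterLab terminal whether anything is listening on port 8188 yet. A quick probe using bash's built-in `/dev/tcp` redirection (no extra tools needed):

```shell
# Probe localhost:8188 without extra tools, via bash's /dev/tcp feature.
check_port() {
    # returns 0 if a TCP connection to 127.0.0.1:$1 succeeds
    (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}
if check_port 8188; then PORT_STATUS="up"; else PORT_STATUS="down"; fi
echo "port 8188 is $PORT_STATUS"
```

If the port is down, ComfyUI is probably still starting up (or the `python main.py --listen` process exited; check its terminal for errors).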
Conclusion
With this setup, you can now harness the power of 5090 GPUs on RunPod for ComfyUI while maintaining persistent storage. The network volume allows you to shut down your server when not in use and resume your work later, saving costs without losing progress. In an upcoming guide, we'll cover how to add models to your installation.