Hunyuan ⚠️ | civitai.com. The link above collects many tips about HunyuanVideo and is updated regularly. I myself have been tied up with the Long CLIP-L issue and haven't played with HunyuanVideo all that much, but on Civitai …

Jan 11, 2025 · First, I have a workflow that extracts videos from a folder and converts their fps to a target framerate of your choice (the default is 24 fps for Hunyuan).

Just an attempt at creating a multi-concept LoRA for Hunyuan. Generating videos with ComfyUI and Hunyuan.

Another capability: recovery from a 70% masked area while still keeping consistency. Example masked image: a muscular man with blue eyes; a man in a Batman suit with an eye cover.

May 13, 2025 · Based on FramePack.

Jan 28, 2025 · Hello! This is an easy-to-follow guide on how to create your first LoRA for Hunyuan Video using diffusion-pipe on RunPod.

Feb 8, 2025 · How does a Hunyuan LoRA work? A Hunyuan LoRA works like a LoRA in Stable Diffusion and Flux, and you can either train one yourself or find one on Civitai. Example prompts: …

Feb 5, 2025 · Boreal-HL: Boring Reality Hunyuan LoRA.

May 21, 2025 · The effect is quite nice and easy to evaluate in training, so I've extended the dataset with videos in the following versions for text-to-video models like Wan and Hunyuan, and it is what I generally use to test LoRA training on new models now.

That brings around an 8x speed-up compared to the original HunyuanVideo at 50 steps. I am still testing different use cases and will add working sample prompts.

Fixed compatibility with the latest ComfyUI and Use Everywhere nodes.

Feb 10, 2025 · Improved Hunyuan Video. It looks like you can get to at least 73 frames and probably a bit more.

Try them out and bring your prompts to life in a whole new way! Check out our guide to Video in the Civitai Generator for more details!

Feb 8, 2025 · Rename it to hunyuan_video_vae_bf16-kj.safetensors. The LoRA is supposed to be the difference between the fast model and the vanilla model, so you can load it on top of the vanilla/fp8 model; however, the result may not be the same as using the fast model itself (see the extraction sketch further below).

Feb 19, 2025 · The trademark rights of “Tencent Hunyuan” are owned by Tencent or its affiliate.

…5 model is a robust and advanced engine designed for producing both text-to-video and image-to-video content. You can download this video, then drag it into ComfyUI to see the workflow and parameters.

Jan 27, 2025 · Don't forget to Like 👍 the model.

Powered by Tencent Hunyuan. I'm excited to share the first LoRA I've baked right here in Thailand! 🇹🇭

Jan 25, 2025 · Requirements: LeapFusion LoRA v2 (544p) or v1 (320p). In short, it uses a special LoRA to do the trick. This guide is optimized for …

Dec 6, 2024 · NOTE: this is WIP; while people work tirelessly in this community to bring things to everyone as fast as humanly possible, please understand that …

Apr 21, 2025 · Powered by Tencent Hunyuan. Disclaimer: although base HunyuanVideo knows generic anime style well without needing LoRAs, and has some knowledge of Studio Ghibli's art style, the latter is not consistent, is very prompt-dependent, and can sometimes fall back to a realistic style.

The FramePack LoRA training, conducted with the Musubi tuner, used 13 videos to generate a big swing dance.

Step 3: Download the LoRA model.

This LoRA was trained on videos of women lifting or pulling down their shirts to reveal their breasts.

Powered by Tencent Hunyuan. This resource is intended to reproduce the likeness of a real person.
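To make the "fast minus vanilla" idea above concrete, here is a minimal sketch of how such a difference LoRA could be extracted: take the weight delta between the two checkpoints and keep a low-rank SVD approximation of it. This is purely illustrative, not the script actually used for FastHunyuan; the key names, the chosen rank, and the assumption of plain, unsharded bf16/fp16 checkpoints are all assumptions.

```python
import torch
from safetensors.torch import load_file, save_file

RANK = 32  # hypothetical rank: higher keeps more of the "fast" behaviour

def extract_diff_lora(fast_path: str, vanilla_path: str, out_path: str) -> None:
    fast = load_file(fast_path)        # fast (distilled) checkpoint
    vanilla = load_file(vanilla_path)  # original HunyuanVideo checkpoint
    lora = {}
    for key, w_fast in fast.items():
        w_van = vanilla.get(key)
        # Only matching 2-D projection weights get a low-rank factorisation here.
        if w_van is None or w_fast.ndim != 2:
            continue
        delta = w_fast.float() - w_van.float()
        u, s, vh = torch.linalg.svd(delta, full_matrices=False)
        r = min(RANK, s.numel())
        # delta ≈ (u * s)[:, :r] @ vh[:r, :], stored as lora_up @ lora_down
        lora[f"{key}.lora_up.weight"] = (u[:, :r] * s[:r]).contiguous()
        lora[f"{key}.lora_down.weight"] = vh[:r, :].contiguous()
    save_file(lora, out_path)

# Hypothetical file names:
# extract_diff_lora("fast_hunyuan.safetensors", "hunyuan_video.safetensors",
#                   "fast_diff_lora.safetensors")
```

Because the rank truncation throws information away, loading such a LoRA onto the vanilla/fp8 model only approximates the fast checkpoint, which is why the result can differ from using the fast model directly.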
Dec 23, 2024 · So fasten your seatbelt and get ready to discover how Hunyuan can elevate your video projects to a whole new level! How to run Hunyuan in ComfyUI: installation guide.

Dec 21, 2024 · I hope you enjoy Hunyuan. Thanks to Tencent for open-sourcing such a great model, to Kijai for the quick wrapper, and to ComfyUI for the quick native support.

Dec 7, 2024 · To address the variability in linguistic style and length of user-provided prompts, we fine-tune the Hunyuan-Large model as our prompt rewrite model to adapt the original user prompt to a model-preferred prompt. We provide two rewrite modes, Normal mode and Master mode, which can be called with different prompts. It is generalized to work with any character/subject.

Aug 9, 2024 · I have done my own test: most people will pick No. 1 as the original image, but No. 2 is actually the original, and No. 1 is the image produced with inpainting control using the Hunyuan DiT model. Updated double-blocks version. This will bring more creativity to the community, and I'm looking forward to seeing the Hunyuan I2T performance in January 2025! Trust me, more is coming, like a new Hunyuan DiT image model.

Dec 24, 2024 · Hunyuan Video has excellent Hanfu and rich traditional Chinese elements, but its expression of artistic mood is somewhat lacking; this LoRA was trained precisely for that, and I hope people will like it.

Simply put, this is an image-to-video workflow built from the resources we have today (January 2025) and the Hunyuan model. Using the I2V LeapFusion LoRA plus IP2V encoding, it can be very consistent, in my opinion on par with the old Kling in terms of consistency.

Jan 3, 2025 · Objective: I merged the model with ~65 LoRAs.

Mar 9, 2025 · Hunyuan Video fast LoRA download link. If you like this model, please 👍 it and leave a review! Also, feel free to give me a ⚡, it would be greatly appreciated. Whether you're just starting out or looking to sharpen your skills, this guide will help you create more compelling, dynamic AI-generated videos. You should probably leave the strength at 1.0. Powered by Tencent Hunyuan. If you enjoy my work, please consider leaving a like or a review.

(Olsen in 2012): trained on Hunyuan Video fp8 at 512x512 px with 58 photos of Elizabeth Olsen from 2012, with detailed GPT-4 captions.

Apr 11, 2025 · Technical details regarding "Uncensored": the model used for Hunyuan was based on the llava-llama-3 8-billion-parameter LLM.

Download the Makoto Shinkai Anime Style LoRA model.

I'm having a lot of fun playing with the different settings, but for now this guide is for your very first run. Use embedded_guidance_scale=5.0-8.0.

A LoRA file is typically ~100x smaller than the Hunyuan base model; it is a small model file that stores the changes to some parts of the Hunyuan base model (see the size-and-merge sketch further below).

Jan 20, 2025 · The Secret Sauce, trained on ≈5600 images. This model's dimension is large, so combining it with other LoRA models may not be beneficial.

Dec 20, 2024 · This is a LoRA for Yor Forger (Yor Briar) from "Spy x Family." It is compatible with the Hunyuan video model, enabling the generation of high-quality videos or images. If you don't like it, still let me know why so that I can improve!

Jan 19, 2025 · Workflow for Hunyuan video that first generates a small-resolution video very quickly, then upscales it with Hunyuan V2V once you find one you like.

Apr 23, 2025 · Videos may leverage Hunyuan's T2V or Wan's T2V/I2V models and must be generated on Civitai (local generation provides too many potential advantages, such as resolution, framerate, etc.). Hunyuan / Wan LoRAs can be used, so long as the result meets the contest criteria (PG/PG-13).

You need a good GPU. I am testing with an NVIDIA 4080 16 GB and it sometimes runs slowly; with 32 GB I also had some issues, so it is better to have 48 GB.

Jan 23, 2025 · Hunyuan Video. I use it myself; if you have any questions, you can leave me a message.

Apr 17, 2025 · We've just launched Hunyuan and Wan 2.1 for on-site video generation – both with LoRA support for more customizable results.

This model is my first attempt at improving the realism details of Hunyuan videos. It focuses on creating more natural backgrounds without shallow depth of field, better skin texture, more realistic lighting, and an overall increase in detail across the scene.
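To make the "~100x smaller" point above concrete, here is a small, illustrative sketch of the low-rank update a LoRA stores and how it is merged into a base weight. The hidden size, rank, and alpha convention below are assumptions for illustration (real Hunyuan LoRAs target specific attention/MLP projections, and trainers differ in their scaling conventions).

```python
# Illustrative only: a LoRA stores two small matrices per targeted layer and is
# merged as W' = W + strength * (alpha / rank) * up @ down.
import torch

def merge_lora_layer(w: torch.Tensor, up: torch.Tensor, down: torch.Tensor,
                     alpha: float, strength: float = 1.0) -> torch.Tensor:
    rank = down.shape[0]
    return w + strength * (alpha / rank) * (up @ down)

# Size intuition for a single square projection; 3072 is used purely as a
# Hunyuan-scale stand-in for the hidden size, and rank 16 is a common choice.
d, rank = 3072, 16
full_params = d * d            # 9,437,184
lora_params = 2 * d * rank     # 98,304 (up: d x r, down: r x d)
print(full_params / lora_params)  # 96.0 -> roughly the ~100x figure, per layer
```

The `strength` factor here corresponds to the LoRA strength slider mentioned in these notes (e.g. "leave the strength at 1.0").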
Recommended prompt structure. Positive prompt: …

Jan 2, 2025 · Many of you may already know this, but looking at Civitai there are now a large number of HunyuanVideo LoRAs, and they seem to be appearing at an incredible pace; the HunyuanVideo scene is clearly hot right now. Incidentally, to browse them on the Civitai site, select LoRA and HunyuanVideo in the filters at the top right.

It works combined with the LoRAs available around. There is a third step for upscaling and video interpolation. This is a workflow for texturing an existing mesh with Hunyuan 3D, SDXL and ControlNet.

Mar 11, 2025 · HUNYUAN | All In One, no need to buzz me. | last update: 06/03/2025 ⬇️ OFFICIAL Image To Video V2 Model is out! ⬇️

You can get the fast fp8 model or the LoRA here and swap it in for the original model.

Jan 29, 2025 · HUNYUAN v1.1 (Best Quality, also slowest, high VRAM usage) for …

Jan 10, 2025 · TIPS: 💥 Hunyuan 💥, the bomb you are sleeping on right now | Civitai. I will divide the notes into chapters and I'll write them in the comments.

Dec 20, 2024 · This LoRA is designed specifically for the Hunyuan text-to-video generative model. The Intel vision-tuned model was used to refine the tokenized model, restoring over 5 million values. Tested on Hunyuan Video fp8 and Fast Hunyuan Video fp8; no keywords needed.

This was trained on various videos of people performing the same walking animation.

Dec 4, 2024 · Fast Hunyuan. It can sample high-quality videos with 6-10 diffusion steps.

Mar 9, 2025 · This advancement allows you to run Hunyuan at resolutions that were previously impossible.

Put it in ComfyUI > models > loras > Hunyuan.

Mar 15, 2025 · I created this workflow because I thought I had pushed the limits of what Hunyuan 3D-2 could do, and someone stepped in and proved me completely wrong. Because of this, I fell down the rabbit hole of discovering what HY3D-2 was even capable of. I believe that even their demos did not show that the quality could be significantly better.

ComfyUI workflow for the CivitAI John Wick LoRA. In my experience, it's not too difficult to get everything working if you're …

They are ideal for specific video tasks, such as generating specific objects, styles, and characters.

Browse Hunyuan Stable Diffusion & Flux models, checkpoints, hypernetworks, textual inversions, embeddings, Aesthetic Gradients, and LoRAs.

Mar 8, 2025 · UPDATE: https://civitai.com/articles/12268 <- getting started with the official Hunyuan Video I2V.

Download the file here. Put it in ComfyUI > models > vae.

Hardware: RTX 3090 (24 GB). Google Colab. 97 frames, 24 fps, two LoRAs loaded (see the frame-count helper below). New workflows are already added to the pack. This model brings a seamless, straightforward approach to video creation, requiring minimal setup to achieve impressive results.

Feedback is much appreciated.
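Since several of the snippets above quote frame counts like 73 and 97 at 24 fps, here is a small, purely illustrative helper for sanity-checking them. It assumes, as I understand it, HunyuanVideo's 4x temporal VAE compression, which is why counts of the form 4*t + 1 are the ones that map cleanly to latent frames; the function name is mine, not part of any tool mentioned here.

```python
def clip_info(frames: int, fps: int = 24) -> str:
    # Assumed constraint: (frames - 1) must be divisible by 4 (temporal compression).
    if (frames - 1) % 4 != 0:
        nearest = ((frames - 1) // 4) * 4 + 1
        return f"{frames} frames is not of the form 4*t + 1; try {nearest}"
    latent_frames = (frames - 1) // 4 + 1
    return (f"{frames} frames -> {latent_frames} latent frames, "
            f"~{frames / fps:.1f}s at {fps} fps")

print(clip_info(97))  # 97 frames -> 25 latent frames, ~4.0s at 24 fps
print(clip_info(73))  # 73 frames -> 19 latent frames, ~3.0s at 24 fps
```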
Mar 24, 2025 · This workflow mainly optimizes images into NSFW video. It adds a LoRA block node, optimizes parameter settings, and reduces video artifacts, blur and other issues when using certain Hunyuan LoRAs. Use around LoRA strength=1.0 and flow_shift=7.0. Prompting helps a lot, but it works even without it.

FP8 CUSTOM MERGE! An example workflow for merging is available as the 2nd download file.

Dec 18, 2024 · With the native ComfyUI implementation of Hunyuan, I have tweaked the workflow to work on 12 GB VRAM cards.

Trigger: noa \(blue archive\). Prompt: …

If you use my ComfyUI Colab notebook, put the model in AI_PICS > models > VAE.

Plus, we'll take a closer look at some of the most popular video generation models available on Civitai and explain the prompt styles that work best for each.

ThinkDiffusion-StableDiffusion-ComfyUI-Hunyuan.json

Performance on my workflow was already solid, but with this addition and with SageAttention enabled I managed to get these generation times: 720x480, 217 secs (3.6 minutes); 960x544, 362 secs (6 minutes). It takes about 8 minutes for a 4070 Ti to run 20 steps. A faster Hunyuan model has been released.

Dec 28, 2024 · This workflow is intended to allow video generation with the incredible Hunyuan model on 12 GB of VRAM (tested on a 4070). The workflow isn't fancy, as I hate those workflows; I prefer simple workflows you can easily adapt to your needs.

Dec 23, 2024 · FastHunyuan LoRA is an accelerator model.

Jan 22, 2025 · Note: this model card contains a number of Hunyuan-related models required by Civitai for Hunyuan inference and LoRA training. Some of these models may be duplicate uploads, and that's OK - this is primarily for internal use.

Apr 27, 2025 · HUNYUAN ALL IN ONE BASIC - FAST WITH UPSCALE (T2V, I2V & V2V) uses native ComfyUI nodes and offers three operation methods: T2V (Text-to-Video); I2V (Image-to-Video), in which an image is multiplied into x frames and sent to the latent space with a balanced denoising level to preserve the structure, composition, and colors of the original image; and V2V (Video-to-Video).

Jan 20, 2025 · The open-source community has figured out how to run Hunyuan V2V using LoRAs. ;) !!! This workflow is obsolete !!! Some better options: Wan2.…

Dec 31, 2024 · I had been playing with Deforum and ignoring it, but HunyuanVideo, which recently gained ComfyUI support, can apparently generate lewd footage. Looking at the Civitai site, it is full of uploads of exactly that kind of video; it's almost too impressive. I also got curious about why HunyuanVideo is able to do this. Below is Civitai's …

Jan 19, 2025 · Idea from this Reddit thread: https://www.reddit.com/r/StableDiffusion/comments/1i4fkuk/hunyuan_vid2vid/; custom node: https://github.com/logtd/ComfyUI-Hun…

Dec 12, 2024 · This is a LoRA for the Hunyuan Video generative text-to-video model.

To do that (continuing the fps-conversion workflow from the Jan 11 notes above), it takes the least common multiple (LCM) of the original and target fps (for example, for 30 and 24 it is 120), then uses FILM to interpolate up to the LCM of the two numbers, and … (the retiming arithmetic is sketched just below).
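The LCM-based retiming described just above can be sanity-checked with a few lines. This is only a sketch of the arithmetic: the actual interpolation is done by a FILM node in that workflow and is not reproduced here, and the function name is mine.

```python
from math import lcm

def retime_plan(src_fps: int, dst_fps: int) -> tuple[int, int, int]:
    common = lcm(src_fps, dst_fps)       # e.g. lcm(30, 24) = 120
    upsample_factor = common // src_fps  # 120 / 30 = 4  -> FILM fills frames up to 4x
    keep_every = common // dst_fps       # 120 / 24 = 5  -> then keep every 5th frame
    return common, upsample_factor, keep_every

print(retime_plan(30, 24))  # (120, 4, 5)
```

Interpolating to the LCM first means the subsampling step always lands on exact frame positions, so no additional blending is needed when dropping down to the target 24 fps.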