Hardeep Gambhir
26 March 2026
engineering
Why ComfyUI is becoming core AI production infrastructure: pipeline-first workflows, node ecosystems, and studio-grade adoption signals.



ComfyUI has raised $17 million, has over 106,000 GitHub stars, and VFX supervisors who worked on Wolf of Wall Street and Hereditary are now building ComfyUI courses. It's quietly becoming the backbone of every serious AI production pipeline. The Houdini parallel is real: node-based, procedural, reproducible, pipeline-friendly. Production companies and studios care about pipeline far more than they care about which model generates the prettiest clip.
But the interesting thing here isn't the funding or the star count. It's who is showing up.
Eran Dinur is an Emmy- and VES Award-winning VFX Supervisor. His credits include Iron Man, Star Trek, and Transformers at ILM. He ran VFX on Wolf of Wall Street, Hereditary, Boardwalk Empire, The Greatest Showman, Uncut Gems, Ad Astra, and The Lighthouse at Brainstorm Digital. He wrote the literal textbook on VFX filmmaking, published by Routledge. He's currently supervising VFX on A24's Marty Supreme.
In March 2026, Dinur launched a full course on fxphd teaching VFX professionals how to build custom ComfyUI workflows. Not hobbyist stuff. Production workflows for texture generation, matte painting, CG pass compositing, and 3D model creation, integrated with Maya, Nuke, and render passes including Z-depth, normals, and Cryptomattes.

Victor Perez, VFX Supervisor on The Dark Knight Rises, Rogue One, and multiple Harry Potter films, is selling a six-month ComfyUI course for £410 with a dedicated Discord community. Doug Hogan, a compositor with 18 years in VFX, launched a 15-module ComfyUI for VFX course on ActionVFX Academy. ActionVFX's promo material included a line that should make anyone in this space pay attention: large studios they work with are considering hiring hundreds of ComfyUI artists in the near future.
These are not people who chase hype. These are people who spent decades in pipeline heavy production environments. When they commit months of their time to building training material for a tool, it means they've already decided it's going to be part of the workflow.
Dinur is also an adjunct professor at the School of Visual Arts in NYC. An educator at Full Sail University described ComfyUI's approach as aligned with node-based industry standards like Nuke and Houdini. When universities start teaching a tool, the next generation of artists learns it as the default. That's a different kind of adoption than Reddit popularity. It's generational lock-in.
And the Reddit popularity is there too. 53,000 Discord members on the official Comfy Org server. 148,000 subscribers on r/comfyui. The broader r/StableDiffusion community, over 750,000 members, treats ComfyUI as the default tool for anything beyond basic generation. Comfy Org claims millions of users on their About page. Even discounting that for marketing, the numbers are big and growing fast.
Houdini 1.0 shipped from SideFX, a Toronto company, on October 2, 1996. It cost $9,500. The first film to use it was Jingle All the Way.
The criticism was immediate and lasted years. Too technical. Steep learning curve. Unfriendly interface. Why learn a node-based procedural system when Maya lets you sculpt things directly? Studios were comfortable with their existing tools. Nobody wanted to retool.
It took Houdini roughly 22 years to go from release to receiving the Academy Award of Merit, the most prestigious technical Oscar, in 2018. Along the way, the free Apprentice tier (launched 2002) slowly built the user base. Major simulation tools like PyroFX for fire and smoke and FLIP for fluids proved that procedural approaches could do things hand animation couldn't. Disney used it heavily on Frozen. By 2015 it was a must know tool for effects work.
Today Houdini is at every major VFX house on the planet. ILM, DNEG, MPC, Framestore, Weta, Pixar, Disney, Sony Imageworks. It won because the node-based procedural philosophy turned out to be exactly what production environments needed: reproducibility, automation, and the ability to build reusable tools that other artists could pick up without starting from scratch.
The resistance pattern was identical to what you see with ComfyUI today. Too complex, too steep, why bother when simpler options exist. But the studios that adopted Houdini early built better pipelines. And pipelines win.

Here's the part of the story that doesn't get told enough.
comfyanonymous, the creator of ComfyUI, describes himself as a "boring software engineer" with a web development background. He had no PyTorch experience before October 2022. None. He was previously involved with Stability AI through 2023, but his background was web dev, not machine learning, not computer vision, not VFX.
His first commit to ComfyUI was January 2023.

By March 2026, his tool has 106,000 GitHub stars, $17 million in venture funding, NVIDIA engineering custom GPU optimizations for it, SIGGRAPH running official workshops on it, and Emmy-winning VFX supervisors building courses around it. Netflix and Amazon Studios teams use it. The Russo Brothers' studio is hiring for it. SideFX, the company behind Houdini, built a bridge product to connect to it.
The creator is still pseudonymous. Nobody outside the company knows his real name. He still personally implements support for major new AI models, often within hours of release. When Black Forest Labs dropped FLUX in August 2024, ComfyUI had day-zero support because he built it himself.
There's something worth sitting with there. The most important piece of AI production infrastructure wasn't built by a research lab or a VFX software company or a team of PhD machine learning engineers. It was built by a web developer who decided to learn PyTorch in late 2022 and started committing code a few months later. His co-founders are Yoland Yan, a former Search ML engineer at Google and Chromium committer who built ComfyCLI, and Robin, a former Google Cloud engineer who created the Comfy Registry. The organization page says "Our organization is very flat and there are no titles. The only thing that matters is the quality of your ideas and execution."
When he was asked about the interface being difficult to learn, his response was: "Everyone is trying to make easy to use interfaces. Let me try to make a powerful interface that's not easy to use."
Their stated mission: "We are not building a walled garden. We are building the OS of creative AI."
In October 2024 they shipped the Desktop app. A one-click installer, an Electron wrapper, uv handling all the Python dependency management. No command line. No conda environments. No YouTube tutorial just to get the thing running. This matters more than it sounds. The single most important thing SideFX ever did for Houdini's adoption was launch the free Apprentice tier in 2002. It removed the $9,500 barrier and let students and hobbyists build fluency before they ever entered a studio. ComfyUI's Desktop app is the same move: drop the friction to zero and let the user base compound.
That's not arrogance. That's a design decision. And it's the same decision SideFX made with Houdini thirty years ago.
Here's the argument most people miss when they talk about AI image and video generation. They compare outputs. Which model makes the best looking image. Which tool has the nicest default settings. Wrong conversation.
In production, nobody cares which tool generated the prettiest single frame. They care about reproducibility. Whether another artist can pick up where someone left off. Whether the workflow can be version controlled, automated, and run at scale without someone clicking buttons.
ComfyUI's architecture is built for this. Every workflow is a JSON-encoded directed acyclic graph. When you generate an image, the complete workflow gets embedded as metadata in the PNG. Drag that image back into ComfyUI and the entire node graph recreates itself. That alone is worth more to a production team than any quality improvement in any model.

The execution engine topologically sorts the graph and caches every node's input hash. Change one parameter and only that node and its downstream dependencies re-execute. For a 20-node pipeline with expensive model loading, this is the difference between a five-minute iteration and a 30-second one.
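The caching idea itself fits in a few lines. This is a toy sketch under my own node format, not ComfyUI's actual engine: each node hashes its parameters plus everything flowing in from upstream, and only re-runs when that fingerprint changes.

```python
import hashlib
import json
from graphlib import TopologicalSorter

def run_graph(nodes, cache, executed):
    """Execute a DAG of nodes, skipping any node whose inputs haven't changed.

    nodes: id -> {"params": dict, "deps": [upstream ids], "fn": callable}
    cache: persistent dict of id -> (input_hash, cached_output)
    executed: list collecting the ids that actually ran this pass
    """
    order = TopologicalSorter({nid: n["deps"] for nid, n in nodes.items()}).static_order()
    results = {}
    for nid in order:
        node = nodes[nid]
        upstream = [results[d] for d in node["deps"]]
        # Fingerprint = this node's own params plus all upstream outputs.
        fingerprint = hashlib.sha256(
            json.dumps([node["params"], upstream], sort_keys=True).encode()
        ).hexdigest()
        hit = cache.get(nid)
        if hit is not None and hit[0] == fingerprint:
            results[nid] = hit[1]  # unchanged inputs: reuse the cached output
        else:
            results[nid] = node["fn"](node["params"], upstream)
            cache[nid] = (fingerprint, results[nid])
            executed.append(nid)
    return results
```

Bump the sampler's seed in a load → sample → save chain and only the sampler and the save node run again; the expensive checkpoint load is served from cache. That's the whole iteration-speed story in miniature.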
The API layer runs on port 8188, accepts full workflow JSON via HTTP POST, queues multiple jobs, and provides real-time progress over WebSocket. This means ComfyUI can run headless as production infrastructure: batch processing, A/B testing, automated content generation. Platforms like Replicate and Cerebrium already offer hosted ComfyUI API endpoints because the demand is there.
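Queueing a job from a script looks roughly like this. The `/prompt` endpoint and the `{"prompt": ..., "client_id": ...}` body shape match ComfyUI's documented API as I understand it, but treat this as a sketch against a local default install, not a canonical client; `build_prompt_request` and `queue_prompt` are my own helper names.

```python
import json
import urllib.request
import uuid

COMFY_URL = "http://127.0.0.1:8188"  # ComfyUI's default local address; adjust for your host

def build_prompt_request(workflow: dict, client_id: str) -> urllib.request.Request:
    """Wrap an API-format workflow in the JSON body the /prompt endpoint expects."""
    body = json.dumps({"prompt": workflow, "client_id": client_id}).encode()
    return urllib.request.Request(
        f"{COMFY_URL}/prompt",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def queue_prompt(workflow: dict) -> str:
    """POST the workflow; the returned prompt_id keys progress events on the WebSocket."""
    req = build_prompt_request(workflow, str(uuid.uuid4()))
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["prompt_id"]
```

A batch runner is then just a loop over workflow variants calling `queue_prompt`, with a WebSocket listener on the same `client_id` to collect progress. No UI involved anywhere.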
The parallel to Houdini's Digital Assets is the ComfyUI Subgraph: complex node combinations packaged into reusable single nodes that expose only essential controls while hiding internal complexity. This is how pipeline technical directors build tools for artists. It's the same pattern that made Houdini indispensable.
Luca Pataracchia at SideFX put the procedural philosophy clearly: thinking procedurally means you think about the problem ahead of time, about where you want to end up and the steps to get there. That's the exact mindset ComfyUI rewards. And it's the exact mindset production demands.
According to Comfy Org's careers page, ComfyUI is used by teams at OpenAI, Netflix, Amazon Studios, Ubisoft, EA, and Tencent. These are self-reported claims, but they appear on formal job listings where misrepresentation carries legal risk, and the $17 million from Pace Capital, Chemistry Ventures, Abstract Ventures, and angel investor Guillermo Rauch (founder of Vercel) adds credibility.
The job postings tell a clearer story. Harbor, an NYC post production company, is hiring an AI Animator/Artist requiring ComfyUI skills at $66K to $101K. That's a real salary range for a job title that did not exist two years ago. AGBO, the Russo Brothers' studio, listed Creative Developer roles alongside ComfyUI positions. Sawhorse Productions, whose clients include Walmart, Google, and NBCUniversal, is hiring an AI Filmmaker requiring ComfyUI experience. PFX posted a dedicated "ComfyUI Artist" role. When recruiters start writing job descriptions around a specific open source tool, the adoption question is already settled.
Magnopus, the studio behind The Wizard of Oz at Sphere, published a technical blog about deploying ComfyUI in a shared production environment using centralized network drives for models and custom nodes. The kind of pipeline engineering detail that signals genuine production use, not experimentation.
NVIDIA is a direct partner now. At GDC 2026, they announced 2.5x faster generation and 60% lower VRAM usage on RTX 50 Series GPUs specifically for ComfyUI. RTX Video Super Resolution is available as a native ComfyUI node for 30x faster 4K upscaling. At SIGGRAPH 2025, ComfyUI had an official three-hour hands-on workshop.
And then there's this: SideFX themselves, the company that makes Houdini, hosts talks about the Houdini ComfyUI Bridge. A dedicated plugin that integrates ComfyUI's AI rendering directly into Houdini's node graph. The two tools aren't competing. They're converging.
Coca Cola's Holidays Are Coming campaign was built with ComfyUI by Silverside AI. Multiple Corridor Crew productions run on it. Chase Jarvis, founder of CreativeLive, stated that many of the jaw dropping images people see on social media from brands like Coca Cola, Puma, and Salesforce have a good chance of having been made with ComfyUI.
To understand why ComfyUI's growth matters, you need to understand what came before it. AUTOMATIC1111, or A1111, is the open source web interface most people used to run Stable Diffusion locally on their own machines. It's a Gradio-based app: you type a prompt into a text box, adjust some settings with sliders and dropdowns, and hit generate. It's how millions of people first experienced AI image generation outside of cloud services like Midjourney or DALL·E. At roughly 145,000 GitHub stars, it's still the most starred Stable Diffusion project. It works fine for single-image generation. Type a prompt, get a picture.
But that's also its ceiling. A1111 is a prompt box with settings. It has no node graph, no visual workflow, no way to chain multiple models or conditioning steps together in a single pipeline. For basic use it's faster to learn and easier to run. For anything resembling production work, it hits a wall fast.

ComfyUI's 106,000 GitHub stars place it second to A1111, but the trajectory is what matters. ComfyUI gained 40,900 stars during 2024 alone, ranking number 4 on the Runa Capital ROSS Index of fastest growing open source startups. It went from about 21,000 stars at end of 2023 to 62,000 at end of 2024 to 106,000 by March 2026. That's 5x growth in two years.

A1111 development has stalled on modern models. It doesn't support FLUX in its main repository. Speed benchmarks on identical hardware show ComfyUI generating 768x1024 SDXL images in 16 seconds versus A1111's 27 seconds, a consistent 2x performance advantage. And A1111's linear Gradio interface simply cannot express complex multi model pipelines. It was built for a different era of this technology, when running Stable Diffusion 1.5 locally was the whole point. The field moved past that.
InvokeAI, the main competitor targeting professional studios, runs roughly 20 seconds per iteration versus ComfyUI's 3.2 seconds on FLUX workloads. It also hits more out-of-memory errors on limited VRAM and has a smaller ecosystem.
Where ComfyUI really pulls ahead is speed of model support. When Black Forest Labs released FLUX in August 2024, ComfyUI had day-zero support with built-in templates. NVIDIA, Black Forest Labs, and ComfyUI collaborated directly on the launch. Community developers, particularly kijai, frequently add experimental model support even faster than the core team. No other interface consistently matches this speed.
The model range covers basically everything that exists. Every Stable Diffusion variant, FLUX 1 and 2, PixArt, Hunyuan, Lumina, Chroma. Video generation via AnimateDiff, Mochi, LTX Video, Hunyuan Video, Wan 2.1 and 2.2, CogVideo. Audio. 3D. If a model drops today, ComfyUI will probably support it by tomorrow.

The custom node ecosystem is what makes ComfyUI nearly impossible to compete with. There are roughly 1,500 to 2,500 custom node packages, each containing anywhere from a handful to hundreds of individual nodes. The WAS Node Suite alone ships over 300 individual utility nodes. Impact Pack, Essentials, ControlNet Aux, each contain 50 to 100 plus. The "10,000 nodes" number is plausible when you count individual node types across all packages.
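Part of why the ecosystem grew this fast is how little a node costs to write: a node package is plain Python exposing a small class interface (`INPUT_TYPES`, `RETURN_TYPES`, a `FUNCTION` name, and a `NODE_CLASS_MAPPINGS` registry). The example below is a toy I made up to show the shape, not a real package, and the exact interface details should be checked against ComfyUI's custom node documentation.

```python
class PromptPrefixNode:
    """Toy custom node: prepends a house-style prefix to a prompt string."""

    CATEGORY = "examples"          # where the node appears in the add-node menu
    RETURN_TYPES = ("STRING",)     # one STRING output socket
    FUNCTION = "apply"             # method ComfyUI calls when the node executes

    @classmethod
    def INPUT_TYPES(cls):
        # Declares the node's input sockets and widget defaults.
        return {"required": {
            "prompt": ("STRING", {"multiline": True}),
            "prefix": ("STRING", {"default": "cinematic, 35mm, "}),
        }}

    def apply(self, prompt, prefix):
        # Outputs are returned as a tuple matching RETURN_TYPES.
        return (prefix + prompt,)

# ComfyUI discovers nodes through this module-level registry.
NODE_CLASS_MAPPINGS = {"PromptPrefixNode": PromptPrefixNode}
```

Drop a file like this into a `custom_nodes` directory and the node shows up in the graph editor. That near-zero barrier is how you end up with thousands of packages.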
The ComfyUI Registry at [registry.comfy.org](https://registry.comfy.org/) launched in early 2025, replacing manual GitHub curation with semantic versioning and security scanning. ComfyUI Manager, with over 10,400 stars, functions as the ecosystem's app store: search for a node, one-click install, dependency handling, restart. When you load someone else's workflow and it uses nodes you don't have installed, Manager detects them and offers batch installation. That's workflow portability, which is critical for production teams.
The most impactful packages tell you where the tool is going. AnimateDiff Evolved turns ComfyUI into a video production engine. Impact Pack handles face retouching and segmentation at production quality. IPAdapter Plus enables reference image based style transfer for brand consistency across campaigns. The 3D Pack brings gaussian splatting, NeRF, and mesh texturing into the visual workflow. AdvancedLivePortrait does real time portrait animation. LLM integration tools connect GPT, Ollama, and Gemini directly into node graphs.
The ecosystem extends ComfyUI into domains the original creator never planned for. That's the network effect. Every new node package makes switching harder. It's Unity's Asset Store logic, Houdini's Digital Assets logic, applied to generative AI.
But open ecosystems have open risks. In June 2024, a compromised extension called `ComfyUI_LLMVISION` was distributed through the community and stole user credentials. The Nullbulge incident was a real supply-chain attack on a real user base. It's the kind of thing that happens when an ecosystem gets big enough to be worth attacking. The Registry's security scanning and semantic versioning, launched in early 2025, were partly a response to exactly this. Growing pains that come with actually mattering.
The learning curve is real and severe. Community consensus puts A1111 competency at 1 to 3 days versus ComfyUI at 2 to 4 weeks. This is by design. We already covered what comfyanonymous said about it. He meant it.
"Node spaghetti," workflows with dozens of nodes and hundreds of intersecting connections, is a frequent target of community ridicule. GitHub Issue 1132 documents the tool getting mocked on social media for overly complicated workflows. It's a fair criticism. Complex workflows look intimidating and the visual language can become genuinely unreadable without careful organization.

Breaking updates are a real production concern. One user documented spending three days getting previously working workflows functional after updates broke compatibility with popular node packs. Error messages are developer-oriented: a simple missing model file produces a PyTorch stream reader error that means nothing to a non-programmer. CUDA out-of-memory errors account for roughly 80% of beginner support requests. Mac performance on Apple Silicon lags significantly behind NVIDIA hardware.
Every single one of these criticisms could have been copied from Houdini forum posts circa 2005. Too technical, too steep, designed for TDs rather than artists, comfortable alternatives that were good enough for basic work. Houdini had all of the same problems. Houdini won anyway. Production environments don't optimize for the easiest tool. They never have.

This isn't speculation. It's a pattern that has repeated across every creative software domain.
Nuke owns compositing: 78% of VFX studios prefer its node-based workflow over layer-based alternatives like After Effects. Substance Designer owns procedural texturing, used by 85% of AAA game studios. Unreal Engine Blueprints replaced traditional scripting with node-based visual programming. Blender's Geometry Nodes brought Houdini-style procedural power to open source. DaVinci Resolve's Fusion has been node-based since 1987. TouchDesigner was literally built from Houdini's codebase by SideFX co-founder Greg Hermanovic.
In every domain where workflows need to be reproducible, shareable, automatable, and pipeline-integrated, node-based tools win. Generative AI was never going to be the exception.
ComfyUI is three years old. Houdini took 22 years to get its Oscar. The timeline will compress dramatically because of open source distribution, zero licensing cost, and the speed at which the AI space moves. But the adoption pattern is the same.
The company is pre revenue but monetizing through Comfy Cloud and partner API nodes. What's interesting is how they built the team. They didn't recruit from big tech hiring pipelines. They absorbed the people who were already building the ecosystem. [Dr.Lt.Data](https://dr.lt.data/), who created ComfyUI Manager and Impact Pack, is now on staff. So is pythongosssss, one of the biggest contributors to the frontend. Kosinkadink, who built AnimateDiff Evolved. Alex Goodwin, former ML engineer at Stability AI. The community didn't just grow around the tool. The community became the company. That's a very specific kind of organizational gravity that's hard to replicate.
Pixar built internal tools to make their specific animation pipeline faster. Every serious studio does. As AI models keep getting more powerful, the winners won't be the people with access to the best model. Everyone will have access to the best model. The winners will be the people who can use all of these tools in the most efficient way. Who build internal workflows for their specific use cases. Who constantly refine their pipeline to be faster, more reproducible, more automated.
That's what ComfyUI enables. That's what Houdini enabled. Different tools, same logic. And if the pattern holds, the people learning ComfyUI right now, while most of the industry is still comparing model outputs in Discord, are the ones who'll be running the pipelines everyone else depends on in five years.

Download the Desktop app. Don't start with the command line installation. The desktop version handles Python, dependencies, and model paths for you. It's free.
Start with the default workflows that ship with the app. Text to image first. Get comfortable with the node graph. Understand what each node actually does rather than just copying workflows from YouTube. The temptation to download someone's 40 node workflow and run it without understanding it is strong. Resist that for the first two weeks. Build simple workflows yourself. Break them. Fix them. That's where the fluency comes from.
Then install ComfyUI Manager and start pulling in custom nodes for whatever your actual use case is. If you're in VFX, ControlNet Aux and Impact Pack first. If you're doing brand work, IPAdapter Plus for style consistency. If you're exploring video, AnimateDiff Evolved.
Expect two to four weeks before you feel competent. Expect the interface to feel hostile at first. That's normal. Houdini artists went through the same thing. The people who pushed through built careers on it.
We're LocalHost. Last year we ran the Mumbai AI Film Festival at the Royal Opera House, where over 1,200 teams applied, 15 were flown in from across the world, and 14 AI short films premiered on the red carpet in front of 600 people, judged by directors like Ram Madhvani and Shakun Batra, with Tanmay Bhat, Ritesh Deshmukh, and teams from Netflix India and Google in attendance. These were some of the biggest names in Bollywood. 80% of the attendees left their traditional jobs to work in AI film adjacent fields, with job offers from top studios.

In February 2026 we followed that up with the India AI Film Festival at Qutub Minar, a UNESCO World Heritage site, during the India AI Impact Summit, in collaboration with the Government of India and sponsored by NVIDIA, screening films for 300+ investors, policymakers, and AI leaders from around the world. This year we're going global: five more AI film festivals in Los Angeles, San Francisco, Paris, Tokyo (in collaboration with the Tokyo Metropolitan Government), and Mumbai.

If you're making things with these tools or are interested in collaborating, shoot me a DM or email me at hardeep[at]localhosthq[dot]com. Our team is young and lean, and we move fast: the Mumbai AI Film Festival was pulled off end-to-end in 25 days because I needed to get to Canada for my driver's license test, and the India AI Film Festival in Delhi was pulled off in 17 days. Join us if you're driven and want to be a pioneer in the AI filmmaking space. We're a global team with deep connections in the USA, Japan, India, and Europe. We have a writing culture in the company and recently raised a round.

- Hardeep & Sanchay

Applications are reviewed on a rolling basis. We back young people from all backgrounds, regardless of credentials.

