The neural frames origin story
How I kicked off an AI video synthesizer with minimal software engineering knowledge while travelling the world
Every startup has its origin story. Google and Apple have their garages, Amazon has its graffiti’d wall logo, and Airbnb had those collectible cereal boxes. neural frames? Well, it started with the flu.
After spending 11 years in the academic world of Physics - even though I always feared it wasn’t my true calling - and later leaving a corporate job (I hated salary negotiations so much that I quit), I decided to join a startup accelerator called Entrepreneur First in Berlin. The idea was simple: throw 50 like-minded individuals together for two months, help them form teams, and potentially invest in their new companies.
I tried different ideas, but none of them really sparked enough passion to keep me committed for the next X years. The program ended, I didn’t have a startup, and the stress of the past two months caught up with me. I got seriously ill - the kind of flu that glues you to the couch for days, with a broken inner thermostat.
But despite feeling miserable physically, my brain was still going in circles. I’d been exposed to all this energy about founding something, and I still very much wanted to do so. So I finally took the time to play with a new open-source AI model called Stable Diffusion that so many people had been raving about. Other image-generation AIs existed (like DALL·E from OpenAI), but those were heavily restricted. With Stable Diffusion, you could just download it onto your own gaming PC and go wild. Luckily, I’d bought one out of boredom at my old corporate job - now it finally had a purpose other than playing Elden Ring and Kerbal Space Program!

By fine-tuning Stable Diffusion models, it became possible to create alternative versions of yourself.
I stumbled on a technique from the deforum community that allowed you to generate animations via Stable Diffusion. (For the techies: it’s basically a giant “image-to-image-to-image…” loop, applying transformations at each step.) I trained a model on a few images of myself, created a short animated clip, and had so much fun that I thought, “I can’t be the only person who’d enjoy this.”
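For the curious, here’s a minimal sketch of what such a feedback loop looks like, using Hugging Face’s diffusers library. The model, prompt, seed image, and zoom transform are illustrative placeholders, not the actual pipeline behind deforum or neural frames:

```python
# A deforum-style animation loop, sketched with illustrative values:
# each generated frame is transformed slightly, then fed back into
# img2img, producing a continuously morphing animation.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Start from any seed image (hypothetical file name).
frame = Image.open("seed_frame.png").convert("RGB").resize((512, 512))

frames = [frame]
for _ in range(120):  # ~5 seconds of video at 24 fps
    # The per-step transformation: here a gentle zoom, done by cropping
    # the borders and scaling back up. Deforum supports many more
    # (rotation, translation, 3D warps, ...).
    w, h = frame.size
    zoomed = frame.crop(
        (int(0.01 * w), int(0.01 * h), int(0.99 * w), int(0.99 * h))
    ).resize((w, h))
    # Low strength keeps consecutive frames coherent; higher values
    # let the image drift faster toward the prompt.
    frame = pipe(
        prompt="a portrait dissolving into a nebula, highly detailed",
        image=zoomed,
        strength=0.45,
        guidance_scale=7.5,
    ).images[0]
    frames.append(frame)

# Afterwards, stitch the frames into a video, e.g. with ffmpeg or imageio.
```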
The problem was, you more or less had to be a developer to create these animations. Yet I felt certain there were non-developers out there who’d create awesome stuff with the same tech. Whenever people asked, “Who’s your ideal customer?” or “How will you market this?” I didn’t know. Honestly, I also didn’t really care. I just wanted to build this. I had no real use case in mind—just the feeling that video generation is important and that this was a new way to do it. In other words, I started totally backwards: from the technology, not from a problem. I’ve been working hard to correct that ever since.
Then, right after ChatGPT first launched, I began building what would become neural frames - and ChatGPT was a huge help. I’m a physicist with no prior experience in React or cloud computing, so everything was brand new to me: from how buttons on the frontend work to how GPUs could serve a varying load of users. I was lucky enough to have AWS credits and a couple of late-night calls with Mike from AWS Support (Mike, if you’re out there, you’re a legend 🙏). He patiently walked me through all my questions, from target groups to autoscaling.

Taking notes while AWS-Mike is sharing autoscaling wisdom with me at 11:30 pm.
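I won’t pretend to remember the exact setup Mike and I landed on, but the gist of serving a spiky GPU workload looks something like this boto3 sketch: a target-tracking policy on an Auto Scaling group of GPU instances. The group name, policy name, and target value are hypothetical placeholders, not neural frames’ actual configuration:

```python
# Hedged sketch of GPU autoscaling on AWS (names and numbers are
# hypothetical placeholders, not the real neural frames setup).
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-central-1")

# Target tracking: AWS launches more GPU instances when the group's
# average utilization exceeds the target, and terminates them when it
# falls below. CPU is used here as a rough proxy for load; a custom
# CloudWatch metric (e.g. render-queue depth) would track GPU demand
# more directly.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="render-workers",   # hypothetical ASG of GPU instances
    PolicyName="track-render-load",          # hypothetical policy name
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,  # aim for ~60% average utilization
    },
)
```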
With his guidance and my own trial-and-error, I launched neural frames in the first week of January 2023. It immediately got some visibility on Reddit, then landed at #6 on Hacker News, giving me more traffic in a single day than I’d seen before or since. I had posted it on Hacker News without expecting anything and then gone out for pizza with my friends. When my Capricciosa arrived, I happened to check Google Analytics and said, “Oh, guys, I think I need to take care of my servers.”

My original Hacker News post brought in something like 10,000 visitors in a single day
One reason I think it resonated on Hacker News is that it looked…terrible. I had no clue how to make a real landing page, so it just read:
1. “Render your first frame”
2. “Render all your other frames”
3. “Export”
Along with an example video. Primitive, yes – but it certainly didn’t look corporate, and it resonated with a certain group of people! One commenter on Hacker News compared the explanation on the landing page to this meme. Not entirely wrong, I would say.

And soon enough, I made my first internet money off of it.

With that small success, I took off on a global nomad adventure with my girlfriend - Colombia, Mexico, Sri Lanka, Thailand, Malaysia, Japan, the U.S. - all while coding non-stop. It was an exciting time, travelling through far-away lands while building something new; the first year was probably one of the best years of my life. It got stressful at times, for instance when the infrastructure was down while I was boarding a plane to the Amazon rainforest. But we saw such awesome things, all while I stayed tuned in to the digital, breathing organism of users out there.

Infrastructure fixes while boarding the plane to Leticia.
After running neural frames alone for a year (supported strongly by my amazing girlfriend), I started hiring people, some of whom are still with neural frames today.
We’re a team of five now, spread out over the world. I am back in Berlin, trying to make it a home again, after two wild delocalized years. And I think the best is yet to come.
AI video of the week
I really love this art series by Safety Marc, a significant contributor to the deforum ecosystem. He uses the FLUX AI model with video input to create super smooth, beautiful animations.
Journey Through Flux #037
🎵 Sound on / full screen
— Safety Marc (@S4f3ty_Marc)
10:42 PM • Mar 7, 2025
What do you want to read about next week?