I have an endless curiosity for trying new things and technologies. AI might already feel like old news to many, but I’m still amazed by the insane pace at which it’s evolving. It’s funny how people instantly dismiss everything made with AI as “crap.” Sure, there’s a lot of crap—but like with any tool, the end result depends on who’s using it. It still takes skill and creativity to make something useful or interesting.
My attention span is short. I test something for a bit, then drop it and move on to the next thing. Knowing that, I have to act fast and get things done before I lose interest. No time to chase perfection. Just quick and dirty. And that's fine, because if I can churn out 10%-quality stuff in one day, I know I could make perfect stuff if I spent a week on it. Just knowing that is enough. I don't need to prove it.
AI tools actually fit my workflow perfectly. Most of them are subscription-based, and many use a credit system where advanced features cost more. So I lean toward generating a lot of content quickly with the cheaper models. If I get decent results with those, I know the expensive ones would probably be a bit better—but not always. Sometimes the older models are actually better for specific tasks.
What I usually do is subscribe to a tool, cancel the subscription right away to avoid ongoing charges, and use it heavily for a month. Then I move on. I've done this with Midjourney, Suno, Kling, and Runway. My Midjourney credits ran out, so now I use free tools like Sora and Ideogram for image generation. That rock video I posted earlier? 100% done with Midjourney and Kling. The rap video was a mix. The latest one was made entirely with Sora and Runway.
The process is simple. AI video tools kinda suck at image generation and imagination, so I do the imagining myself. I plan the shots, gather reference images, and describe what I want to Sora or Midjourney, something like "Make this guy a rock star." I upload the result to Kling or Runway and prompt it with something basic like "Man is singing rock." For the music, I use Suno (usually with lyrics I wrote and fine-tuned in ChatGPT), cut the track into 10-second clips in Audacity, and run lipsync tools to match the audio to the generated video. Then I bring everything together in Adobe Premiere. It's not perfect, but it works. I'm happy with the results.
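For what it's worth, the Audacity step is the one part that could easily be scripted. Here's a minimal sketch using Python's pydub library (my assumption, not what I actually do; I cut the clips by hand in Audacity, and the filenames below are made up):

```python
# Hypothetical sketch: split a Suno track into 10-second clips.
# Assumes pydub is installed (pip install pydub) and ffmpeg is on PATH.
from pydub import AudioSegment

CLIP_MS = 10 * 1000  # pydub measures everything in milliseconds

# Placeholder filename, not an actual project file
song = AudioSegment.from_file("suno_track.mp3")

for i, start in enumerate(range(0, len(song), CLIP_MS)):
    clip = song[start:start + CLIP_MS]  # slicing past the end is safe in pydub
    clip.export(f"clip_{i:02d}.mp3", format="mp3")
```

The last clip just comes out shorter than 10 seconds, which is fine since it gets trimmed in Premiere anyway.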
Honestly, I’m convinced that a year from now, the same people dissing AI-generated art will end up liking something without even realizing it was made with AI.