Iterate to Succeed. Part 3 – AI Avatar Workflow
Have you seen Part 1 and Part 2 of this saga of learning how to train a Stable Diffusion model locally? If not, there may be some gaps in what I explain here, though I’ll provide as much context as I can in this post. If you want the full background of how I got to this point, and the gotchas to avoid, I suggest reading those posts first. I’ll be going into detail on each line of the code I share, so if you’re a lifelong Python dev or already familiar with ML training, much of this will be over-explained. If you want to skip all this and don’t care about…
Iterate, Fail, Part 2: The deep dive (aka the pain)
This is a continuation of my learning from Part 1; it will make more sense if you start with that post first. After all the discovery I did in Part 1, I was feeling very confident that I could get everything up and running locally. My first step was to get Stable Diffusion set up and generate images locally from prompts using a standard model. Once I was confident I could generate decent-quality images, I would move on to DreamBooth and use it to train my face into the model. Simple two-step process, right? Shouldn’t take long at all. I started with some tutorials that relied heavily on CompVis/stable-diffusion to generate…
Iterate, Fail, Iterate…
What an adventure this last week has been! I’ve seen mentions of AI-generated art in many of the publications I read. I’ve seen friends bring AI-generated art to tabletop games to show off their characters. However, my first direct interaction with AI art was through an iOS app called Lensa. With this application you can upload 10–15 images of your face and have a batch of avatars generated for you. These avatars now grace my Gmail, LinkedIn, and other social media accounts. The developer in me wasn’t satisfied, though. I couldn’t let this black box just run on some cloud server and hand me results. I had to know…