After struggling with Stable Diffusion 1.5 last year, I finally saw what a modern AI image model can do.
Back then, I tried to make a picture of a dog in a park and it looked like a blurry mess with six legs. I just tried the new SDXL model this week and the same prompt gave me a photo-realistic golden retriever sitting by a bench, with perfect lighting and shadows. The jump in quality in just over a year is crazy, all from better training data and model architecture. Has anyone else been shocked by how fast this part of AI is moving?
2 comments
richard_kelly54 · 2d ago
I remember trying to make a simple portrait last year, and the face would melt if you looked at it wrong. Now I can get a shot of a guy with stubble, individual eyelashes, proper pores and everything. It went from a fun toy to something that could actually fool you in under 18 months. Makes you wonder what next year's model will look like if it keeps going at this rate.
the_dylan · 2d ago
Just last year we were all laughing at the weird extra limbs and nightmare fuel... now I'm looking at pictures that could be from a magazine and wondering if my own eyes are real. The speed of this is kind of a sick joke, like the tech is actively mocking how bad it used to be. Pretty soon we won't be able to tell what's fake at all, and that's a weird thought to sit with. Guess the blurry six-legged dog era is over for good.