Trying to do everything that the tutorial and other commenters have suggested, but when I click the arrow button it just waits for a few moments and nothing is generated. Am I missing something?
Nvidia should visualize a periodic 24-hour 3D landscape sweep of the chatter on social media platforms, for metaverse dive-throughs and interactive engagement.
I wonder if in a decade or so, large tech companies will unseat Disney, Warner Bros., etc. as creators of animated movies.
While the results on Nvidia's website aren't too impressive, if you look at the history of animated movies, you can see how trivial and simplistic the art and animation once were.
Having had some experience doing research on GANs at university, I know them to be very powerful. What's very important to note is that the images generated by the model are truly "novel", i.e. completely fictitious. The generated images may be biased toward the training data — the color and texture of the water and rocks, for example — but every image is a fantasy of the model. The only way the model can generate such realistic images is that it has a very good abstract internal representation of what oceans, waves, and rocks are.
Back at university, I pitched my professor the idea of using GANs to generate "novel" images in real time while parents read bedtime stories to children. I didn't get very far. Glad to see some real progress in that direction.
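For anyone curious about the adversarial idea behind this, here's a minimal toy sketch (my own illustration, not NVIDIA's model): a generator maps noise to samples, a discriminator scores how "real" a sample looks, and training pushes the generator's output distribution toward the real data. Both networks are collapsed to single-parameter affine/logistic models on 1-D data so the whole loop fits in a few lines:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: g(z) = w_g * z + b_g  (maps noise to a sample)
w_g, b_g = 1.0, 0.0
# Discriminator: d(x) = sigmoid(w_d * x + b_d)  (probability that x is real)
w_d, b_d = 0.1, 0.0

lr = 0.05
for step in range(2000):
    z = rng.normal(size=32)           # noise batch
    real = rng.normal(4.0, 1.0, 32)   # "real" data: N(4, 1)
    fake = w_g * z + b_g

    # Discriminator: gradient ascent on E[log d(real)] + E[log(1 - d(fake))]
    d_real = sigmoid(w_d * real + b_d)
    d_fake = sigmoid(w_d * fake + b_d)
    w_d += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    b_d += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator: gradient ascent on E[log d(fake)] — move samples toward
    # regions the discriminator believes are real.
    d_fake = sigmoid(w_d * (w_g * z + b_g) + b_d)
    grad_fake = (1 - d_fake) * w_d    # d/dfake of log d(fake)
    w_g += lr * np.mean(grad_fake * z)
    b_g += lr * np.mean(grad_fake)

print("mean of generated samples:", np.mean(w_g * rng.normal(size=1000) + b_g))
```

After training, the generated samples cluster near the real-data mean of 4 even though the generator never sees the real data directly — it only ever sees the discriminator's gradient, which is the "internal representation" doing the work.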
MtG dual-color lands serve as a good source of ideas on what to put in the textbox:
As for the UI: layout via tables?
After trying for 30 minutes I kind of understood some stuff. But the UI is a legit anxiety inducer. I hope they can fix it to make it fun. Currently it felt like using '80s DOS graphics software, with so much manual input.
The UI for the demo is atrocious, but that's probably because the text-to-image generation was glued to their existing AI painting tool.
I'd love for just the generation tool itself to be available for download. The web UI is clunky and just doesn't seem to work right.
The UI is horrible... does Nvidia not have a single UX person with two hours to spare who could help out?
I really don't have anything constructive to say. I think in general we're getting too soft on shitty things, so I'm going to be harsh.
I clicked through to the demo site ( http://gaugan.org/gaugan2/ ) and it was horrible.
The interface is clunky, slow, and confusing. I actually had to zoom out in my browser to see the whole thing. Had to click through a non-HTTPS warning. The onboarding tutorial is pretty bad.
I got a generic picture of the Milky Way for any prompt I tried ("rocks", "trees"). If you press Enter in the prompt field, it refreshes the page.
This feels like a hackathon front-end hooked up to an intro to PyTorch webservice. It's only neat because, unlike the other 20 copies of this same project I've seen, it was the only one that didn't immediately throw its hands up and say "server overloaded, please wait."
If I'm meant to be impressed or fearful of "big data deepfake AI," this isn't it.
Wow. I have a good degree of respect for Nvidia, but this should never have been released in this state. Who's the product manager for this?
It seems that the web UI was generated by AI too, because it's really hard to make sense of it.
Are there any open-source models that can do this kind of landscape generation? I would really like to look at the code and try to understand how these things are built...
The comments are overwhelmingly critical of the user interface, which is undoubtedly the weak part of this release, but I was still able to get some very impressive results.
An AI generated house on a lake: https://imgur.com/a/0wtVKum
I have found the best results come from uploading an image, then using the demo tools to get a segmentation map and sketch lines, then editing those as you desire. Changing the styling at the end also makes a big difference!
What license are the produced images under? I could see this being used for cheap stock photos.
I hear they have a miraculous new AI tool that magically determines your sexual desires then uses lasers to induce those feelings through your eyeballs with no contact necessary! Coincidentally demonstrated at this very same URL!
Even then, I don't think I'd care enough to fight through the layers of bullshit here.
I am just getting very weird results that don't look at all like the ones in the demo video. Here, for example, is the image it gave me for "car in front of house": https://i.imgur.com/QdtrtCR.png
Or how about this one for "dog playing with ball" https://i.imgur.com/ldGLdwF.png
I have tried about a dozen different input phrases and every time I get these very strange results.
I entered "kitten" and got typical surreal GAN output with disconnected topology, dozens of eyes, etc.
Edit: Looks like it was only trained on landscape images.
Now it's not just imagination if you can create visual art from text.
To whoever came up with this name: good job.
I finally got through to the demo, three links deep, and it's so busted in so many ways for me that I give up. Maybe it's stupid to try on my old netbook, but I get no indication of whether I need a fancy graphics card for it to work, or whether it's running on my end at all. Anyway:
- The screen zooms around disorientingly during the tutorial, and when I get to "Congratulations, you made your first image" — there's nothing there.
- Exiting the tutorial, checking 'text' instead of 'segmentation' just immediately switches back after entry.
- The whole site is a fixed width that's wider than my screen.
- A red alert checkbox at the bottom leaves me unsure whether that's why it's not working, etc.