AI image generation tools are capturing the attention of developers around the world, and it was only a matter of time before someone combined the magic of AI image generation with the immersion of VR.
That's exactly what Scottie Fox has done with Stable Diffusion VR, an immersive experience that brings AI image generation into a 360° space. No need to rent a huge server to handle everything; it runs on consumer-grade PC hardware and puts the user in the midst of an ever-changing dream world.
It's far from finished, but it works remarkably well even at this experimental stage.
I talked to Scottie about what it's like to work on a cutting-edge project at the intersection of technology and art. Not only did he give us some insight into how the project came about, he also sprinkled in some thoughts on the future of AI image generation when it comes to gaming.
Scottie tells us how the muses delivered the inspiration for the project in the middle of the night. Asked for details, he described the awkwardness that comes with nocturnal epiphanies.
"Unfortunately, there's a light bulb floating overhead, and that 'Eureka' moment can strike at any time… even after the office lights go out in the evening," he says. "After an hour of restless sleep, I literally woke up with a very simple and promising solution to the battle I had fought all day. We tested the theory in practice. Eureka! It wasn't just a dream."
It's nice to see developers act on their visions. It's a little scary to think how many great concepts get left by the wayside because people put things like sleep ahead of progress. Thankfully, Scottie is no such person, and he has been experimenting with the project enthusiastically ever since.
When asked about the iterations his Stable Diffusion VR project has gone through, he notes: "It's only recently that consumer-grade hardware has been able to perform tasks as resource-intensive as real-time rendering."
The hardware he uses to develop Stable Diffusion VR includes an RTX 2080 Ti graphics card backed by an AMD Threadripper 1950X. That's a fair amount of computing power, and while you may shrug at a 20-series graphics card, remember that until recently this was the best GPU money could buy.
All things considered, it’s amazing to see consumer-grade hardware handle the heavy loads associated with both artificial intelligence and virtual reality. Scottie explains:
“One of the biggest challenges I faced in this project was diffusing content in a seamless and continuous way (a heavy task) so that I could visualize an immersive environment.
"Even with today's cloud-based GPU systems and high-end commercial runtimes, 'rendering' a single frame takes seconds or minutes, which is unacceptable for real-time display. Seamless diffusion of continuous content in a 360-degree environment was my first goal, and it was exactly what I struggled with the most."
As we've seen in the project videos, it's clear that Scottie has cleared the performance hurdle. He has managed to build all this amazing software without renting a ton of server space to handle it.
To do this, Scottie had to “break the whole process into small pieces and schedule them to spread out.” Each blue square you see in the video below is a section queued for processing in the background.
Stable Diffusion VR real-time immersive latent space. ⚡️ Added debugging views via TouchDesigner. Diffusing small pieces into the environment saves resources. Tools used: https://t.co/UrbdGfvdRd https://t.co/DnWVFZdppT #aiart #vr #stablediffusionart #touchdesigner pic.twitter.com/TQZGvvA5tH — October 13, 2022
"These pieces are sampled from the main environment space and, when complete, are blended back into the main view," Scottie said. "This saves time by not having to render and diffuse the entire environment at once. The result is a view that evolves slowly and seamlessly while a lot is going on in the background."
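To make the idea concrete, here is a minimal sketch of that queue-and-blend loop. This is not Scottie's actual code; it assumes a flat numpy canvas standing in for the 360° environment, and a placeholder `fake_diffuse` function standing in for the real Stable Diffusion img2img pass, so the structure stays runnable without a model.

```python
import numpy as np

def fake_diffuse(patch):
    """Stand-in for a diffusion pass on one tile.
    (The real project drives Stable Diffusion here; we invert the
    patch so the effect is visible and the sketch stays runnable.)"""
    return 1.0 - patch

def step(canvas, queue, tile=64, blend=0.25):
    """Pop one queued tile, 'diffuse' it, and blend it back in."""
    if not queue:
        return canvas
    y, x = queue.pop(0)
    patch = canvas[y:y + tile, x:x + tile]
    new = fake_diffuse(patch)
    # Blend rather than replace, so the main view evolves gradually
    # instead of shifting the whole scene at once.
    canvas[y:y + tile, x:x + tile] = (1 - blend) * patch + blend * new
    return canvas

# A small equirectangular "environment" and a schedule of tiles —
# each tile is like one of the blue squares queued in the video.
H, W, T = 128, 256, 64
canvas = np.zeros((H, W))
queue = [(y, x) for y in range(0, H, T) for x in range(0, W, T)]

while queue:
    canvas = step(canvas, queue)
```

Only one small patch is processed per step, which is why the approach avoids the seconds-per-frame cost of diffusing the full environment at once.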
It's a very elegant solution; the effect is appealing and less jarring than shifting the entire scene at once. While this is already a much more efficient approach, Scottie says his next steps "revolve around consumer-grade hardware viability and modular capabilities for integration with other software and applications."
Speaking of integration, Scottie talks about his journey of connecting with different developers at summits and more. This is very important for anyone looking to develop a project like this. By learning and testing different approaches that other developers have tried, Scottie has formed a strong foundation for the Stable Diffusion VR project.
"As a creator who has explored a lot of 3D rendering and modeling software, I wanted more. Enter Derivative and its product TouchDesigner. For interactive art installations, this platform has become the industry standard among other tools. It has an amazing ability to translate between protocols and development languages, making it useful for integrating different forms and styles of parallel software."
Wild! I will do my best to achieve my goals! Lots of testing work today. Stable Diffusion in VR + TouchDesigner = real-time immersive latent space. This proof of concept is the future! #aiart #vr #stablediffusion #touchdesigner #deforum pic.twitter.com/Qn5XWJAO7Z — October 7, 2022
"The other half of my inspiration is Deforum — an incredible community of artists, developers, and supporters of dataset-generated art." There, he says, "you can share your struggles, celebrate your successes, and be inspired in ways you simply can't be on your own."
The next big challenge for anyone wanting to use Stable Diffusion is copyright. "Currently, there are many licenses and legal aspects to leveraging datasets and creating art from them. These licenses are open to the public, but I don't have the right to redistribute them as my own."
It's a thorny issue, one that has caused companies like Getty to ban AI-generated images. Shutterstock, however, appears to be going in a different direction, planning to pay artists who contribute training material for its upcoming AI image generation tool.
Scottie clarifies: "It takes a lot of testing and publishing to get more than just a demo out of the office and into the hands of people who will really enjoy it from their own PCs."
“I’ve been approached by so many developers who are interested in my project.”
Some of the wilder ones include "crime scene reconstruction tools that allow witnesses to dictate and 'construct' visual memories to be used as evidence in court." Another involves a customizable therapy environment, in which a trained healthcare provider adjusts the experience of the patient sitting within it based on the treatment.
There are plenty of possibilities for projects to go in non-game directions.
Looking ahead to working with AI, Scottie says that "successful companies will combine current technology with AI content to create hybrid game genres with technological fluidity." He sees potential for in-game AI in bug testing, personalized escape room experiences, and horror and RPG monsters that evolve with each playthrough.
On that horrifying note, I'll leave you with a few inspiring words from the man himself. As a former art student, I agree.
It’s also important to know that “even the most creative minds hit obstacles.” So keep developing. And don’t ignore the muse when it strikes in the middle of the night.