Stable Diffusion update removes the ability to copy artist styles or make NSFW works

Stable Diffusion, the AI that can produce startlingly accurate visuals from text, has been enhanced with a number of new features. According to The Verge, however, many users are dissatisfied with the update because it can no longer produce images in specific artists’ styles or NSFW artwork.

Version 2 introduces several new features. It has a new text encoder called OpenCLIP, which, in the words of Stability AI, “greatly enhances the quality of the generated images compared to earlier V1 releases.” It also ships a brand-new NSFW filter from LAION that is intended to screen out adult material.

What is Stable Diffusion?

Stable Diffusion is a deep-learning text-to-image model released in 2022. Its primary use is to generate detailed images conditioned on text descriptions, but it can also be applied to other tasks such as inpainting, outpainting, and text-guided image-to-image translation.

Stable Diffusion was created by the CompVis group at LMU Munich. It is a latent diffusion model, a type of deep generative neural network. Stability AI, CompVis LMU, and Runway collaborated to release the model, with assistance from EleutherAI and LAION. In October 2022, Stability AI raised US$101 million in an investment round led by Lightspeed Venture Partners and Coatue Management.

The code and model weights for Stable Diffusion have been made available to the public. It can also run on most consumer hardware equipped with a modest GPU and at least 8 GB of VRAM. This represented a change from earlier proprietary text-to-image models such as DALL-E and Midjourney, which were accessible only through cloud services.

The New Stable Diffusion Update

A depth-to-image diffusion model is one of the additions that, in the words of Stability AI, “enables the creation of transformations that seem significantly different from the original yet still keep the coherence and depth from an image.” To put it another way, if you alter an image, objects will continue to display correctly in front of or behind other objects. Finally, a text-guided inpainting model makes it simple to replace certain components of a picture, such as preserving a cat’s face while replacing its body.

However, the update makes it more difficult to produce specific types of images: photorealistic images of celebrities, nude or pornographic output, and images that imitate the aesthetic of specific artists.

Because it is open source and extensible, Stable Diffusion has gained popularity for producing AI art, far more so than closed competitors like DALL-E. For instance, the YouTube VFX channel Corridor Crew demonstrated an add-on called Dreambooth that enabled users to create graphics based on their own images.

Stable Diffusion can imitate artists like Rutkowski by studying their work, examining the images, and learning their patterns. Under Stable Diffusion’s license agreement, people are not permitted to use the model in ways that violate any laws.

Despite this, Rutkowski and other artists have protested the practice. Rutkowski told MIT Technology Review, “I probably won’t be able to find my work out there because [the internet] will be saturated by AI art.”

Follow us on Instagram: @niftyzone