Qualcomm runs Stable Diffusion on Android Phone
According to Qualcomm, it has successfully run Stable Diffusion, an artificial intelligence (AI) art generator, on an Android smartphone for the first time. The demonstration phone used the Snapdragon 8 Gen 2, the Qualcomm chipset found in the new Samsung Galaxy S23 series. Qualcomm's work on on-device AI art generation has reached a wider audience and is building a growing community.
About Qualcomm
Qualcomm is an American multinational company, headquartered in San Diego, California, and incorporated in Delaware, that develops chips, software, and services for wireless technology. It owns patents critical to the 5G, 4G, CDMA2000, TD-SCDMA, and WCDMA mobile communications standards.
Irwin M. Jacobs founded Qualcomm with six other co-founders in 1985. The company funded its early CDMA wireless phone development by selling the Omnitracs two-way mobile digital satellite communications system. After a contentious debate in the wireless industry, the 2G standard that incorporated Qualcomm's CDMA patents was adopted, and several court battles followed over the licensing fees for patents required by the standard.
Qualcomm’s AI Art Generator
Today, competition-worthy AI art can be produced with just a few keystrokes rather than the talent and effort that creating it traditionally requires. Until now, however, most of the AI tools you've seen have run on powerful cloud servers. Qualcomm has just demonstrated what is feasible on a smartphone: according to the company, its Stable Diffusion demo generates art about as quickly on a phone as on a powerful desktop computer.
To make the model practical to run on a smartphone, the company's engineers scaled Stable Diffusion down using a process known as quantization. They started with the FP32 version 1.5 release of Stable Diffusion and, drawing on Qualcomm's latest AI research, compressed it to INT8 (an 8-bit data format). This improves the model's performance and speed on mobile hardware while preserving its accuracy, and thanks to techniques such as Qualcomm's Adaptive Rounding, it could be done without retraining.
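To make the idea of quantization concrete, here is a minimal sketch of symmetric FP32-to-INT8 post-training quantization in plain NumPy. It is only an illustration of the general technique, not Qualcomm's actual toolchain, and production methods such as Adaptive Rounding choose the rounding far more carefully than the naive round-to-nearest used here.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization of FP32 weights to INT8.

    Returns the INT8 values plus the scale needed to dequantize.
    Simplified illustration: tools like AdaRound pick the rounding
    per weight to minimize accuracy loss instead of rounding naively.
    """
    scale = np.abs(weights).max() / 127.0      # map the largest magnitude to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an FP32 approximation of the original weights."""
    return q.astype(np.float32) * scale

# Example: quantize a small random weight matrix and measure the error.
w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print("max abs error:", np.abs(w - w_hat).max())
```

The INT8 weights take a quarter of the memory of FP32 and can use faster integer math on mobile hardware, which is the trade-off the paragraph above describes.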
Stable Diffusion on Android
In a blog post, Qualcomm said that the Stable Diffusion model, which ordinarily needs a lot of processing power, can now produce images quickly and directly on a smartphone.
Stable Diffusion is a deep learning model that can generate images from text prompts. It was created by Stability AI, a UK-based company, and publicly released in August 2022.
In a demonstration video, Qualcomm showed the optimized Stable Diffusion version 1.5 producing a 512×512-pixel image in under 15 seconds.
According to Qualcomm, these techniques were applied to all of Stable Diffusion's component models, including the transformer-based text encoder, the VAE decoder, and the UNet. This was essential for the model to fit on the device.
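As a concrete reference for those components, the sketch below loads the public Stable Diffusion 1.5 checkpoint with the Hugging Face diffusers library, inspects the text encoder, UNet, and VAE, and times a 512×512 generation. The checkpoint name, step count, and prompt are illustrative assumptions; this is the standard open-source pipeline, not the optimized on-device build Qualcomm demonstrated.

```python
import time
import torch
from diffusers import StableDiffusionPipeline

# Load Stable Diffusion 1.5, the same base release Qualcomm started from.
# "runwayml/stable-diffusion-v1-5" is the commonly used public checkpoint.
device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

# The three component models mentioned above.
print(type(pipe.text_encoder).__name__)  # transformer-based text encoder (CLIP)
print(type(pipe.unet).__name__)          # UNet that denoises the latents
print(type(pipe.vae).__name__)           # VAE whose decoder turns latents into pixels

# Time a single 512x512 generation for a rough point of comparison.
start = time.time()
image = pipe("a watercolor painting of a mountain lake",
             height=512, width=512, num_inference_steps=20).images[0]
print(f"generated 512x512 image in {time.time() - start:.1f} s")
image.save("sample.png")
```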
Further, Qualcomm said it used its Qualcomm AI Engine Direct framework to compile the neural network into a program that runs efficiently on a smartphone.
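Qualcomm has not published the exact workflow behind AI Engine Direct, so the sketch below only shows the general shape of that step: exporting a trained PyTorch network to a portable format (ONNX, chosen here purely as an assumed stand-in) that a vendor's on-device runtime could then compile further. The toy network stands in for the real Stable Diffusion sub-models.

```python
import torch
import torch.nn as nn

# A tiny stand-in network; a real pipeline would export each Stable Diffusion
# sub-model (text encoder, UNet, VAE decoder) rather than a toy like this.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(4, 4, kernel_size=3, padding=1)

    def forward(self, x):
        return torch.relu(self.conv(x))

model = TinyNet().eval()
dummy = torch.randn(1, 4, 64, 64)  # latent-sized input, as in SD at 512x512

# Export to ONNX, a portable format that many hardware vendors' runtimes
# (including mobile AI SDKs) can consume and further compile for the device.
torch.onnx.export(model, dummy, "tinynet.onnx",
                  input_names=["latents"], output_names=["out"],
                  opset_version=17)
```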
Qualcomm is not the only company experimenting with AI art generation on mobile devices. In December, Apple also updated its software so that Stable Diffusion can run locally on iPhones through Core ML, the company's proprietary machine learning framework, and the third-party Draw Things app.
In Apple's demonstration, a 512×512 image took roughly a minute to generate, which is about four times longer than the sub-15-second result Qualcomm showed on a smartphone using its processor.
The optimizations that allow Stable Diffusion to run efficiently on mobile devices could also be applied to PCs and any other Qualcomm-powered device.