Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Furthermore, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples.
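To make the two-stage data flow concrete, here is a minimal PyTorch sketch of the pipeline: caption embedding → prior → CLIP image embedding → decoder → image. The `Prior` and `Decoder` classes, `CLIP_DIM`, and `IMAGE_SIZE` are hypothetical placeholders chosen for illustration, not the architectures or dimensions used in the paper.

```python
# Minimal sketch of the two-stage generation pipeline (placeholder modules,
# NOT the paper's actual prior or diffusion decoder).
import torch
import torch.nn as nn

CLIP_DIM = 512    # assumed CLIP embedding dimensionality
IMAGE_SIZE = 64   # assumed base decoder resolution


class Prior(nn.Module):
    """Placeholder prior: maps a CLIP text embedding to a CLIP image embedding."""

    def __init__(self, dim: int = CLIP_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim * 4),
            nn.GELU(),
            nn.Linear(dim * 4, dim),
        )

    def forward(self, text_emb: torch.Tensor) -> torch.Tensor:
        return self.net(text_emb)


class Decoder(nn.Module):
    """Placeholder decoder: renders pixels conditioned on a CLIP image embedding."""

    def __init__(self, dim: int = CLIP_DIM, image_size: int = IMAGE_SIZE):
        super().__init__()
        self.image_size = image_size
        self.to_pixels = nn.Linear(dim, 3 * image_size * image_size)

    def forward(self, image_emb: torch.Tensor) -> torch.Tensor:
        x = self.to_pixels(image_emb)
        return x.view(-1, 3, self.image_size, self.image_size)


def generate(text_emb: torch.Tensor, prior: Prior, decoder: Decoder) -> torch.Tensor:
    # Stage 1: the prior produces a CLIP image embedding from the text embedding.
    image_emb = prior(text_emb)
    # Stage 2: the decoder generates an image conditioned on that embedding.
    return decoder(image_emb)


if __name__ == "__main__":
    prior, decoder = Prior(), Decoder()
    fake_text_emb = torch.randn(1, CLIP_DIM)  # stand-in for a real CLIP text embedding
    image = generate(fake_text_emb, prior, decoder)
    print(image.shape)  # torch.Size([1, 3, 64, 64])
```

In the paper the decoder is a diffusion model and the prior is either autoregressive or diffusion-based; the sketch above only traces how the two stages hand the image embedding from one to the other.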