Translations:Diffusion Models Are Real-Time Game Engines/28/en
We repurpose a pre-trained text-to-image diffusion model, Stable Diffusion v1.4 (Rombach et al., [https://arxiv.org/html/2408.14837v1#bib.bib26 2022]). We condition the model <math>f_{\theta}</math> on trajectories <math>T \sim \mathcal{T}_{agent}</math>, i.e., on a sequence of previous actions <math>a_{< n}</math> and observations (frames) <math>o_{< n}</math>, and remove all text conditioning. Specifically, to condition on actions, we simply learn an embedding <math>A_{emb}</math> from each action (e.g., a specific key press) into a single token and replace the cross-attention from the text with cross-attention over this encoded action sequence. To condition on observations (i.e., previous frames), we encode them into latent space using the auto-encoder <math>\phi</math> and concatenate them along the latent channel dimension with the noised latents (see Figure [https://arxiv.org/html/2408.14837v1#S3.F3 3]). We also experimented with conditioning on these past observations via cross-attention but observed no meaningful improvements.
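
As an illustration, the sketch below shows how these two conditioning paths might look in a PyTorch-style implementation. It is a minimal sketch, not the authors' code: the module names, the action-vocabulary size, and the number of context frames are assumptions; only the overall scheme (one learned token per action fed to cross-attention, and past-frame latents concatenated channel-wise with the noised latents) follows the description above.

<syntaxhighlight lang="python">
# Minimal sketch of the two conditioning paths (all names and sizes are assumptions).
import torch
import torch.nn as nn

NUM_ACTIONS = 18          # assumed size of the discrete action vocabulary (key presses)
TOKEN_DIM = 768           # cross-attention context dimension of Stable Diffusion v1.4
LATENT_CHANNELS = 4       # SD v1.4 latent channels per frame
CONTEXT_FRAMES = 3        # assumed number of past observations used as context


class ActionEmbedding(nn.Module):
    """Maps each discrete action to a single token, playing the role of A_emb."""

    def __init__(self, num_actions=NUM_ACTIONS, dim=TOKEN_DIM):
        super().__init__()
        self.embed = nn.Embedding(num_actions, dim)

    def forward(self, actions):
        # actions: (batch, n_prev) int64 action indices
        # returns: (batch, n_prev, dim) tokens used as the cross-attention context
        return self.embed(actions)


def build_unet_input(noised_latent, past_frames, encode):
    """Concatenate encoded past frames with the noised latent along the channel dim.

    noised_latent: (batch, 4, h, w)    latent of the frame being denoised
    past_frames:   (batch, k, 3, H, W) previous RGB observations
    encode:        the auto-encoder phi's encode function, (N, 3, H, W) -> (N, 4, h, w)
    """
    b, k = past_frames.shape[:2]
    past_latents = encode(past_frames.flatten(0, 1))                 # (b*k, 4, h, w)
    past_latents = past_latents.unflatten(0, (b, k)).flatten(1, 2)   # (b, 4*k, h, w)
    return torch.cat([noised_latent, past_latents], dim=1)           # (b, 4*(k+1), h, w)
</syntaxhighlight>

In such a setup, the U-Net's first convolution would need to be widened to accept the extra latent channels, and its cross-attention layers would attend over the action tokens in place of the text embeddings.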