Translations:Neural Networks/28/zh: Difference between revisions

    From Marovi AI
    ([deploy-bot] Translate Neural Networks unit 28 to zh)
    Tag: translation
     
    (Batch translate Neural Networks unit 28 → zh)
    Tag: translation
    − * '''[[Convolutional Neural Networks|卷积神经网络]]'''(CNN)—— 面向图像等网格结构数据,采用局部连接与权重共享。
    + * '''[[Convolutional Neural Networks]]'''(CNN)——专为图像等网格结构数据设计,使用局部连接和权重共享。
    − * '''[[Recurrent Neural Networks|循环神经网络]]'''(RNN)—— 面向序列数据,通过形成回路的连接维持隐藏状态。
    + * '''[[Recurrent Neural Networks]]'''(RNN)——专为序列数据设计,连接形成环路以维持隐藏状态。
    − * '''Transformer''' —— 基于注意力机制的架构,已在自然语言处理领域占据主导地位,并日益应用于视觉领域。
    + * '''Transformers'''——基于 attention 的架构,已在自然语言处理中占主导地位,并越来越多地用于视觉领域。
    − * '''自编码器'''(autoencoder)—— 训练目标是重建输入,常用于降维与生成式建模。
    + * '''Autoencoders'''——经过训练来重建其输入的网络,用于降维和生成建模。
    − * '''生成对抗网络'''(GAN)—— 由生成器与判别器两个网络相互对抗训练,用于生成逼真的数据。
    + * '''生成对抗网络'''(GAN)——成对的网络(生成器和判别器)以竞争方式训练以生成逼真的数据。

    Revision as of 03:35, 27 April 2026

    Message definition (Neural Networks)
    * '''[[Convolutional Neural Networks]]''' (CNNs) — designed for grid-structured data such as images, using local connectivity and weight sharing.
    * '''[[Recurrent Neural Networks]]''' (RNNs) — designed for sequential data, with connections that form cycles to maintain hidden state.
    * '''{{Term|transformer|Transformers}}''' — {{Term|attention}}-based architectures that have become dominant in natural language processing and increasingly in vision.
    * '''Autoencoders''' — networks trained to reconstruct their input, used for dimensionality reduction and generative modelling.
    * '''Generative adversarial networks''' (GANs) — pairs of networks (generator and discriminator) trained in competition to generate realistic data.
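    As an editorial illustration only (not part of the translatable message), the two mechanisms named in the first two bullets can be sketched in plain Python: the shared convolution kernel that gives a CNN its local connectivity and weight sharing, and the recurrent update that lets an RNN carry a hidden state across a sequence. All weights below are arbitrary illustrative values.

    ```python
    import math

    def conv1d(x, w):
        """Slide one shared kernel w across x: every output position
        reuses the same weights (weight sharing) and only sees a
        local window of the input (local connectivity)."""
        k = len(w)
        return [sum(w[j] * x[i + j] for j in range(k))
                for i in range(len(x) - k + 1)]

    def rnn_step(x_t, h_prev, w_x=0.5, w_h=0.3):
        """One recurrent update: the new hidden state depends on the
        current input and the previous hidden state — the 'cycle'
        the RNN bullet describes. Weights are illustrative."""
        return math.tanh(w_x * x_t + w_h * h_prev)

    # A difference-like kernel applied with the same weights at each position.
    print(conv1d([1, 2, 3, 4], [1, 0, -1]))   # [-2, -2]

    # Hidden state carried across a short input sequence.
    h = 0.0
    for x_t in [1.0, 0.0, 1.0]:
        h = rnn_step(x_t, h)
    print(round(h, 3))
    ```

    Note how the last hidden state depends on every earlier input through the recurrence, whereas each convolution output depends only on its local window.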