Translations:Neural Networks/28/zh

    Message definition (Neural Networks)
    * '''[[Convolutional Neural Networks]]''' (CNNs) — designed for grid-structured data such as images, using local connectivity and weight sharing.
    * '''[[Recurrent Neural Networks]]''' (RNNs) — designed for sequential data, with connections that form cycles to maintain hidden state.
    * '''{{Term|transformer|Transformers}}''' — {{Term|attention}}-based architectures that have become dominant in natural language processing and increasingly in vision.
    * '''Autoencoders''' — networks trained to reconstruct their input, used for dimensionality reduction and generative modelling.
    * '''Generative adversarial networks''' (GANs) — pairs of networks (generator and discriminator) trained in competition to generate realistic data.
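
Minimal sketches of each of the five architectures above follow (all assuming PyTorch, with illustrative layer sizes; none of the module names or dimensions come from this page). First, a convolutional network: the same small kernel is applied at every spatial position, which is exactly the local connectivity and weight sharing described in the first item.

<syntaxhighlight lang="python">
# Minimal CNN sketch (assumes PyTorch; all sizes are illustrative).
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Each 3x3 kernel is slid over the whole image: local
        # connectivity, with one shared set of weights per filter.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),   # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),   # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

x = torch.randn(4, 1, 28, 28)   # a batch of grey-scale "images"
logits = TinyCNN()(x)           # shape: (4, 10)
</syntaxhighlight>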
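Next, a recurrent network: the recurrent connection forms a cycle in time, so the hidden state at each step summarises everything seen so far.

<syntaxhighlight lang="python">
# Minimal RNN sketch (assumes PyTorch; sizes are illustrative).
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)
x = torch.randn(4, 20, 8)   # (batch, time steps, features)

# The recurrent connection feeds each step's hidden state back in,
# so the state at step t depends on the whole prefix x_1..x_t.
output, h_n = rnn(x)
print(output.shape)         # (4, 20, 16): hidden state at every step
print(h_n.shape)            # (1, 4, 16): final hidden state
</syntaxhighlight>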
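For transformers, the core operation is attention. A from-scratch sketch of scaled dot-product self-attention (again assuming PyTorch, with illustrative shapes; this is the generic textbook form, not any particular library's implementation):

<syntaxhighlight lang="python">
# Scaled dot-product self-attention, the core of a transformer layer.
import math
import torch

def attention(q, k, v):
    # Every query attends to every key; the softmax weights decide
    # how much of each value flows into each output position.
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    weights = torch.softmax(scores, dim=-1)
    return weights @ v

q = k = v = torch.randn(4, 20, 64)  # (batch, tokens, model dim)
out = attention(q, k, v)            # shape: (4, 20, 64)
</syntaxhighlight>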
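An autoencoder pairs an encoder and a decoder around a narrow bottleneck; training it to reconstruct its own input forces a compact code, which is what makes it useful for dimensionality reduction. A minimal sketch under the same assumptions:

<syntaxhighlight lang="python">
# Minimal autoencoder sketch (assumes PyTorch; sizes are illustrative).
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # The 32-unit bottleneck is the low-dimensional code.
        self.encoder = nn.Sequential(nn.Linear(784, 32), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(32, 784), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyAutoencoder()
x = torch.rand(4, 784)
# Trained to reconstruct its input: the target is the input itself.
loss = nn.functional.mse_loss(model(x), x)
</syntaxhighlight>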
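Finally, a GAN alternates two updates: the discriminator learns to tell real samples from generated ones, and the generator learns to fool it. A schematic single training step (networks, losses, and optimiser settings are all illustrative):

<syntaxhighlight lang="python">
# One GAN training step (assumes PyTorch; everything is schematic).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 784), nn.Tanh())  # generator: noise -> sample
D = nn.Sequential(nn.Linear(784, 1))              # discriminator: sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(4, 784)   # stand-in for a batch of real data
noise = torch.randn(4, 16)

# Discriminator step: label real samples 1, generated samples 0.
# detach() keeps generator weights frozen during this update.
fake = G(noise).detach()
d_loss = bce(D(real), torch.ones(4, 1)) + bce(D(fake), torch.zeros(4, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: push the discriminator to label fakes as real.
g_loss = bce(D(G(noise)), torch.ones(4, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
</syntaxhighlight>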