Translations:Neural Networks/20/zh: Difference between revisions

    ([deploy-bot] Translate Neural Networks unit 20 to zh)
    Tag: translation
     
    (Batch translate Neural Networks unit 20 → zh)
'''通用近似定理'''(universal approximation theorem,Cybenko 1989,Hornik 1991)指出:在 <math>\mathbb{R}^n</math> 的紧致子集上,只要激活函数满足温和条件(例如非常数、有界且连续),具有有限个神经元的单隐藏层前馈网络便能以任意精度逼近任何连续函数。
    '''通用逼近定理'''(Cybenko 1989,Hornik 1991)指出,具有有限数量神经元的单隐藏层前馈网络可以以任意精度逼近 <math>\mathbb{R}^n</math> 紧子集上的任何连续函数,前提是激活函数满足温和的条件(例如,非常数、有界且连续)。

    Revision as of 03:35, 27 April 2026

    Message definition (Neural Networks)
    The '''universal approximation theorem''' (Cybenko 1989, Hornik 1991) states that a feedforward network with a single hidden layer containing a finite number of neurons can approximate any continuous function on a compact subset of <math>\mathbb{R}^n</math> to arbitrary accuracy, provided the {{Term|activation function}} satisfies mild conditions (e.g. is non-constant, bounded, and continuous).

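The theorem stated above can be illustrated numerically. The following sketch (not part of the wiki page) fits a single-hidden-layer feedforward network with a bounded, non-constant, continuous activation (tanh) to <math>\sin(x)</math> on the compact interval <math>[-\pi, \pi]</math>; the width of 30 units, learning rate, and step count are arbitrary illustrative choices, not values prescribed by the theorem.

```python
import numpy as np

# Illustrative only: fit sin(x) on the compact interval [-pi, pi] with a
# single-hidden-layer tanh network trained by plain gradient descent.
# Width (30) and hyperparameters are arbitrary choices for the demo.
rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

n_hidden = 30
W1 = rng.normal(scale=1.0, size=(1, n_hidden))   # input -> hidden weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, 1))   # hidden -> output weights
b2 = np.zeros(1)

lr = 0.1
for _ in range(5000):
    h = np.tanh(x @ W1 + b1)      # bounded, non-constant, continuous activation
    pred = h @ W2 + b2            # linear output layer
    grad_pred = 2.0 * (pred - y) / len(x)   # d(MSE)/d(pred)
    # Backpropagate through the two layers.
    gW2, gb2 = h.T @ grad_pred, grad_pred.sum(axis=0)
    grad_h = (grad_pred @ W2.T) * (1.0 - h ** 2)
    gW1, gb1 = x.T @ grad_h, grad_h.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2))
print(f"final MSE: {mse:.5f}")
```

The theorem only guarantees that *some* finite-width network achieves any desired accuracy; it says nothing about how to find it. Gradient descent is used here merely as a convenient way to exhibit a close approximation in practice.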