Translations:Neural Networks/20/zh: Difference between revisions

    From Marovi AI
    (Batch translate Neural Networks unit 20 → zh)
    Tag: translation
    '''通用逼近定理'''(Cybenko 1989,Hornik 1991)指出,含有有限数量神经元的单隐藏层前馈网络可以在 <math>\mathbb{R}^n</math> 的任意紧子集上以任意精度逼近任何连续函数,只要{{Term|activation function|激活函数}}满足温和的条件(例如非常数、有界且连续)。

    Latest revision as of 23:40, 27 April 2026

    Message definition (Neural Networks)
    The '''universal approximation theorem''' (Cybenko 1989, Hornik 1991) states that a feedforward network with a single hidden layer containing a finite number of neurons can approximate any continuous function on a compact subset of <math>\mathbb{R}^n</math> to arbitrary accuracy, provided the {{Term|activation function}} satisfies mild conditions (e.g. is non-constant, bounded, and continuous).

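The theorem stated above can be illustrated numerically: fix a single hidden layer with a bounded, non-constant, continuous activation (sigmoid here), draw its weights at random, and solve only for the output weights by least squares. The sketch below is an illustrative experiment, not part of the theorem itself; the target function, network width, and weight scales are arbitrary choices.

```python
import numpy as np

# Illustration of the universal approximation theorem: a one-hidden-layer
# network approximating a continuous function on a compact set.
# Hidden-layer parameters are random and frozen; only the output layer
# is fitted. All sizes and scales below are illustrative assumptions.

rng = np.random.default_rng(0)

def sigmoid(z):
    # Bounded, non-constant, continuous activation, as the theorem requires.
    return 1.0 / (1.0 + np.exp(-z))

# Target: a continuous function on the compact interval [0, 1].
x = np.linspace(0.0, 1.0, 200)[:, None]   # shape (200, 1)
y = np.sin(2 * np.pi * x[:, 0])

n_hidden = 50
W = rng.normal(scale=10.0, size=(1, n_hidden))  # hidden weights
b = rng.normal(scale=10.0, size=n_hidden)       # hidden biases

H = sigmoid(x @ W + b)                     # hidden activations, shape (200, 50)
c, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights via least squares

max_err = np.max(np.abs(H @ c - y))
print(f"max |f(x) - net(x)| on [0, 1]: {max_err:.4f}")
```

Widening the hidden layer (increasing `n_hidden`) drives the maximum error down, which is the empirical face of the theorem's "arbitrary accuracy" claim; the theorem guarantees existence of such weights but says nothing about how training finds them.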