
Algorithms for calculating variance play a major role in computational statistics. A key difficulty in the design of good algorithms for this problem is that formulas for the variance may involve sums of squares, which can lead to numerical instability as well as to arithmetic overflow when dealing with large values.

Naïve algorithm

A formula for calculating the variance of an entire population of size N is:

    \sigma^2 = \frac{\sum_{i=1}^{N} x_i^2 - \left(\sum_{i=1}^{N} x_i\right)^2 / N}{N}

Using Bessel's correction to calculate an unbiased estimate of the population variance from a finite sample of n observations, the formula is:

    s^2 = \frac{\sum_{i=1}^{n} x_i^2 - \left(\sum_{i=1}^{n} x_i\right)^2 / n}{n - 1}

Therefore, a naïve algorithm to calculate the estimated variance is given by the following:

  • Let n ← 0, Sum ← 0, SumSq ← 0
  • For each datum x:
    • n ← n + 1
    • Sum ← Sum + x
    • SumSq ← SumSq + x × x
  • Var = (SumSq − (Sum × Sum) / n) / (n − 1)

This algorithm can easily be adapted to compute the variance of a finite population: simply divide by n instead of n − 1 on the last line.
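As an illustrative sketch (the function name naive_variance is not from the sources cited here), the pseudocode above translates directly into Python:

def naive_variance(data):
    # direct transcription of the pseudocode above; returns the sample variance
    n = 0
    total = 0.0
    total_sq = 0.0
    for x in data:
        n += 1
        total += x
        total_sq += x * x
    return (total_sq - total * total / n) / (n - 1)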

Because SumSq and (Sum×Sum)/n can be very similar numbers, cancellation can lead to the precision of the result being much less than the inherent precision of the floating-point arithmetic used to perform the computation. Thus this algorithm should not be used in practice,[1][2] and several alternative, numerically stable algorithms have been proposed.[3] This is particularly bad if the standard deviation is small relative to the mean.

Computing shifted data


The variance is invariant with respect to changes in a location parameter, a property which can be used to avoid the catastrophic cancellation in this formula:

    \operatorname{Var}(X) = \operatorname{Var}(X - K)

with K any constant, which leads to the new formula

    s^2 = \frac{\sum_{i=1}^{n} (x_i - K)^2 - \left(\sum_{i=1}^{n} (x_i - K)\right)^2 / n}{n - 1}

The closer K is to the mean value \bar{x}, the more accurate the result will be, but just choosing a value inside the range of samples will guarantee the desired stability. If the values (x_i - K) are small, then there are no problems with the sum of their squares; on the contrary, if they are large, it necessarily means that the variance is large as well. In any case the second term in the formula is always smaller than the first one, therefore no cancellation can occur.[2]

If just the first sample is taken as K, the algorithm can be written in the Python programming language as

def shifted_data_variance(data):
    if len(data) < 2:
        return 0.0
    K = data[0]
    n = Ex = Ex2 = 0.0
    for x in data:
        n += 1
        Ex += x - K
        Ex2 += (x - K) ** 2
    variance = (Ex2 - Ex**2 / n) / (n - 1)
    # use n instead of (n - 1) if you want to compute the exact variance of the given data
    # use (n-1) if data are samples of a larger population
    return variance

This formula also facilitates the incremental computation that can be expressed as

K = Ex = Ex2 = 0.0
n = 0


def add_variable(x):
    global K, n, Ex, Ex2
    if n == 0:
        # use the first added value as the shift K
        K = x
    n += 1
    Ex += x - K
    Ex2 += (x - K) ** 2

def remove_variable(x):
    global K, n, Ex, Ex2
    n -= 1
    Ex -= x - K
    Ex2 -= (x - K) ** 2

def get_mean():
    global K, n, Ex
    return K + Ex / n

def get_variance():
    global n, Ex, Ex2
    return (Ex2 - Ex**2 / n) / (n - 1)

Two-pass algorithm


An alternative approach, using a different formula for the variance, first computes the sample mean,

    \bar{x} = \frac{\sum_{i=1}^{n} x_i}{n}

and then computes the sum of the squares of the differences from the mean,

    s^2 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n - 1}

where s is the standard deviation. This is given by the following code:

def two_pass_variance(data):
    n = len(data)
    mean = sum(data) / n
    variance = sum((x - mean) ** 2 for x in data) / (n - 1)
    return variance

This algorithm is numerically stable if n is small.[1][4] However, the results of both of these simple algorithms ("naïve" and "two-pass") can depend inordinately on the ordering of the data and can give poor results for very large data sets due to repeated roundoff error in the accumulation of the sums. Techniques such as compensated summation can be used to combat this error to a degree.
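One such variant is sketched below, using Kahan (compensated) summation in the second pass; this is an illustrative sketch rather than an algorithm from the cited references, and the function name is chosen here for clarity:

def two_pass_variance_compensated(data):
    # two-pass variance with Kahan summation of the squared deviations
    n = len(data)
    mean = sum(data) / n
    total = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for x in data:
        y = (x - mean) ** 2 - c
        t = total + y
        c = (t - total) - y
        total = t
    return total / (n - 1)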

Welford's online algorithm


It is often useful to be able to compute the variance in a single pass, inspecting each value only once; for example, when the data is being collected without enough storage to keep all the values, or when costs of memory access dominate those of computation. For such an online algorithm, a recurrence relation is required between quantities from which the required statistics can be calculated in a numerically stable fashion.

The following formulas can be used to update the mean and (estimated) variance of the sequence, for an additional element x_n. Here, \bar{x}_n denotes the sample mean of the first n samples (x_1, \dots, x_n), \sigma^2_n their biased sample variance, and s^2_n their unbiased sample variance.

    \bar{x}_n = \bar{x}_{n-1} + \frac{x_n - \bar{x}_{n-1}}{n}

    \sigma^2_n = \frac{(n - 1)\,\sigma^2_{n-1} + (x_n - \bar{x}_{n-1})(x_n - \bar{x}_n)}{n}

    s^2_n = \frac{n - 2}{n - 1}\, s^2_{n-1} + \frac{(x_n - \bar{x}_{n-1})^2}{n}, \qquad n > 1

These formulas suffer from numerical instability[citation needed], as they repeatedly subtract a small number from a big number which scales with n. A better quantity for updating is the sum of squares of differences from the current mean, \sum_{i=1}^{n} (x_i - \bar{x}_n)^2, here denoted M_{2,n}:

    M_{2,n} = M_{2,n-1} + (x_n - \bar{x}_{n-1})(x_n - \bar{x}_n)

    \sigma^2_n = \frac{M_{2,n}}{n}, \qquad s^2_n = \frac{M_{2,n}}{n - 1}

This algorithm was found by Welford,[5][6] and it has been thoroughly analyzed.[2][7] It is also common to denote M_n = \bar{x}_n and S_n = M_{2,n}.[8]

An example Python implementation for Welford's algorithm is given below.

# For a new value new_value, compute the new count, new mean, the new M2.
# mean accumulates the mean of the entire dataset
# M2 aggregates the squared distance from the mean
# count aggregates the number of samples seen so far
def update(existing_aggregate, new_value):
    (count, mean, M2) = existing_aggregate
    count += 1
    delta = new_value - mean
    mean += delta / count
    delta2 = new_value - mean
    M2 += delta * delta2
    return (count, mean, M2)

# Retrieve the mean, variance and sample variance from an aggregate
def finalize(existing_aggregate):
    (count, mean, M2) = existing_aggregate
    if count < 2:
        return float("nan")
    else:
        (mean, variance, sample_variance) = (mean, M2 / count, M2 / (count - 1))
        return (mean, variance, sample_variance)

This algorithm is much less prone to loss of precision due to catastrophic cancellation, but might not be as efficient because of the division operation inside the loop. For a particularly robust two-pass algorithm for computing the variance, one can first compute and subtract an estimate of the mean, and then use this algorithm on the residuals.
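A sketch of that refinement, reusing the update and finalize helpers above (the wrapper name is illustrative; it assumes len(data) >= 2):

def welford_on_residuals(data):
    # subtract a first-pass estimate of the mean, then run Welford's algorithm
    # on the residuals; adding the shift back recovers the true mean
    shift = sum(data) / len(data)
    aggregate = (0, 0.0, 0.0)  # (count, mean, M2)
    for x in data:
        aggregate = update(aggregate, x - shift)
    mean_resid, variance, sample_variance = finalize(aggregate)
    return shift + mean_resid, sample_variance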

The parallel algorithm below illustrates how to merge multiple sets of statistics calculated online.

Weighted incremental algorithm


The algorithm can be extended to handle unequal sample weights, replacing the simple counter n with the sum of weights seen so far. West (1979)[9] suggests this incremental algorithm:

def weighted_incremental_variance(data_weight_pairs):
    w_sum = w_sum2 = mean = S = 0

    for x, w in data_weight_pairs:
        w_sum = w_sum + w
        w_sum2 = w_sum2 + w**2
        mean_old = mean
        mean = mean_old + (w / w_sum) * (x - mean_old)
        S = S + w * (x - mean_old) * (x - mean)

    population_variance = S / w_sum
    # Bessel's correction for weighted samples
    # Frequency weights
    sample_frequency_variance = S / (w_sum - 1)
    # Reliability weights
    sample_reliability_variance = S / (w_sum - w_sum2 / w_sum)
    return (population_variance, sample_frequency_variance, sample_reliability_variance)
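As a usage sketch (the example values are illustrative, and the function is assumed to return the three estimates as in the listing above), integer frequency weights give the same frequency-weighted variance as simply repeating each sample:

# weight 3 for the value 2.0 and weight 1 for 4.0 is equivalent to the
# expanded sample [2.0, 2.0, 2.0, 4.0]
print(weighted_incremental_variance([(2.0, 3), (4.0, 1)]))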

Parallel algorithm


Chan et al.[10] note that Welford's online algorithm detailed above is a special case of an algorithm that works for combining arbitrary sets A and B:

    n_{AB} = n_A + n_B

    \delta = \bar{x}_B - \bar{x}_A

    \bar{x}_{AB} = \bar{x}_A + \delta \cdot \frac{n_B}{n_{AB}}

    M_{2,AB} = M_{2,A} + M_{2,B} + \delta^2 \cdot \frac{n_A n_B}{n_{AB}}

This may be useful when, for example, multiple processing units may be assigned to discrete parts of the input.

Chan's method for estimating the mean is numerically unstable when n_A \approx n_B and both are large, because the numerical error in \delta = \bar{x}_B - \bar{x}_A is not scaled down in the way that it is in the n_B = 1 case. In such cases, prefer \bar{x}_{AB} = \frac{n_A \bar{x}_A + n_B \bar{x}_B}{n_{AB}}.

def parallel_variance(n_a, avg_a, M2_a, n_b, avg_b, M2_b):
    n = n_a + n_b
    delta = avg_b - avg_a
    M2 = M2_a + M2_b + delta**2 * n_a * n_b / n
    var_ab = M2 / (n - 1)
    return var_ab

This can be generalized to allow parallelization with AVX, with GPUs, and computer clusters, and to covariance.[3]
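For instance, per-chunk aggregates in the (count, mean, M2) form used by Welford's update function above can be combined pairwise and reduced over any number of chunks (an illustrative sketch; the helper names are chosen here):

from functools import reduce

def merge(agg_a, agg_b):
    # combine two (count, mean, M2) aggregates using the pairwise formulas above
    n_a, mean_a, M2_a = agg_a
    n_b, mean_b, M2_b = agg_b
    n = n_a + n_b
    delta = mean_b - mean_a
    mean = mean_a + delta * n_b / n
    M2 = M2_a + M2_b + delta ** 2 * n_a * n_b / n
    return (n, mean, M2)

# chunk_aggregates could be produced independently, e.g. one per processor:
# total_count, total_mean, total_M2 = reduce(merge, chunk_aggregates)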

Example


Assume that all floating point operations use standard IEEE 754 double-precision arithmetic. Consider the sample (4, 7, 13, 16) from an infinite population. Based on this sample, the estimated population mean is 10, and the unbiased estimate of population variance is 30. Both the naïve algorithm and the two-pass algorithm compute these values correctly.

Next consider the sample (10^8 + 4, 10^8 + 7, 10^8 + 13, 10^8 + 16), which gives rise to the same estimated variance as the first sample. The two-pass algorithm computes this variance estimate correctly, but the naïve algorithm returns 29.333333333333332 instead of 30.

While this loss of precision may be tolerable and viewed as a minor flaw of the naïve algorithm, further increasing the offset makes the error catastrophic. Consider the sample (10^9 + 4, 10^9 + 7, 10^9 + 13, 10^9 + 16). Again the estimated population variance of 30 is computed correctly by the two-pass algorithm, but the naïve algorithm now computes it as −170.66666666666666. This is a serious problem with the naïve algorithm and is due to catastrophic cancellation in the subtraction of two similar numbers at the final stage of the algorithm.
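The effect is easy to reproduce directly (an illustrative snippet, not taken from the cited sources):

data = [1e9 + x for x in (4, 7, 13, 16)]
n = len(data)
# naïve formula: the two accumulated sums are nearly equal, so their
# difference loses almost all significant digits
naive = (sum(x * x for x in data) - sum(data) ** 2 / n) / (n - 1)
# two-pass formula: deviations from the mean are small, so no cancellation
mean = sum(data) / n
two_pass = sum((x - mean) ** 2 for x in data) / (n - 1)
print(naive)     # a badly wrong value such as -170.66666666666666
print(two_pass)  # 30.0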

Higher-order statistics


Terriberry[11] extends Chan's formulae to calculating the third and fourth central moments, needed for example when estimating skewness and kurtosis:

    M_{3,X} = M_{3,A} + M_{3,B} + \delta^3 \frac{n_A n_B (n_A - n_B)}{n_X^2} + 3\delta \frac{n_A M_{2,B} - n_B M_{2,A}}{n_X}

    M_{4,X} = M_{4,A} + M_{4,B} + \delta^4 \frac{n_A n_B (n_A^2 - n_A n_B + n_B^2)}{n_X^3} + 6\delta^2 \frac{n_A^2 M_{2,B} + n_B^2 M_{2,A}}{n_X^2} + 4\delta \frac{n_A M_{3,B} - n_B M_{3,A}}{n_X}

Here the M_k are again the sums of powers of differences from the mean \sum (x - \bar{x})^k, giving

    \text{skewness: } g_1 = \frac{\sqrt{n}\, M_3}{M_2^{3/2}}, \qquad \text{kurtosis: } g_2 = \frac{n\, M_4}{M_2^2} - 3

For the incremental case (i.e., B is a single observation), this simplifies to:

    \delta = x - m

    m' = m + \frac{\delta}{n}

    M_2' = M_2 + \delta^2 \frac{n - 1}{n}

    M_3' = M_3 + \delta^3 \frac{(n - 1)(n - 2)}{n^2} - \frac{3\delta M_2}{n}

    M_4' = M_4 + \frac{\delta^4 (n - 1)(n^2 - 3n + 3)}{n^3} + \frac{6\delta^2 M_2}{n^2} - \frac{4\delta M_3}{n}

By preserving the value \delta / n, only one division operation is needed and the higher-order statistics can thus be calculated for little incremental cost.

An example of the online algorithm for kurtosis implemented as described is:

def online_kurtosis(data):
    n = mean = M2 = M3 = M4 = 0

    for x in data:
        n1 = n
        n = n + 1
        delta = x - mean
        delta_n = delta / n
        delta_n2 = delta_n**2
        term1 = delta * delta_n * n1
        mean = mean + delta_n
        M4 = M4 + term1 * delta_n2 * (n**2 - 3*n + 3) + 6 * delta_n2 * M2 - 4 * delta_n * M3
        M3 = M3 + term1 * delta_n * (n - 2) - 3 * delta_n * M2
        M2 = M2 + term1

    # Note, you may also calculate variance using M2, and skewness using M3
    # Caution: If all the inputs are the same, M2 will be 0, resulting in a division by 0.
    kurtosis = (n * M4) / (M2**2) - 3
    return kurtosis

Pébay[12] further extends these results to arbitrary-order central moments, for the incremental and the pairwise cases, and subsequently Pébay et al.[13] for weighted and compound moments. Similar formulas for covariance can also be found there.

Choi and Sweetman[14] offer two alternative methods to compute the skewness and kurtosis, each of which can save substantial computer memory requirements and CPU time in certain applications. The first approach is to compute the statistical moments by separating the data into bins and then computing the moments from the geometry of the resulting histogram, which effectively becomes a one-pass algorithm for higher moments. One benefit is that the statistical moment calculations can be carried out to arbitrary accuracy such that the computations can be tuned to the precision of, e.g., the data storage format or the original measurement hardware. A relative histogram of a random variable can be constructed in the conventional way: the range of potential values is divided into bins and the number of occurrences within each bin is counted and plotted such that the area of each rectangle equals the portion of the sample values within that bin:

    H(x^{(k)}) = \frac{h(x^{(k)})}{A}

where h(x^{(k)}) and H(x^{(k)}) represent the frequency and the relative frequency at bin x^{(k)}, and A = \sum_{k=1}^{K} h(x^{(k)}) \, \Delta x^{(k)} is the total area of the histogram. After this normalization, the raw moments and central moments of x(t) can be calculated from the relative histogram:

    m_n^{(h)} = \sum_{k=1}^{K} \left(x^{(k)}\right)^n H(x^{(k)}) \, \Delta x^{(k)}

    \theta_n^{(h)} = \sum_{k=1}^{K} \left(x^{(k)} - m_1^{(h)}\right)^n H(x^{(k)}) \, \Delta x^{(k)}

where the superscript (h) indicates that the moments are calculated from the histogram. For constant bin width \Delta x^{(k)} = \Delta x these two expressions can be simplified using I = A / \Delta x, the total number of samples:

    m_n^{(h)} = \frac{1}{I} \sum_{k=1}^{K} \left(x^{(k)}\right)^n h(x^{(k)})

    \theta_n^{(h)} = \frac{1}{I} \sum_{k=1}^{K} \left(x^{(k)} - m_1^{(h)}\right)^n h(x^{(k)})
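The following sketch (illustrative only; the function name and argument layout are chosen here) computes raw and central moments from an equal-width histogram in this way:

def histogram_moments(bin_centers, counts, order=4):
    # bin_centers and counts describe the histogram; with equal bin widths the
    # common width cancels out of the relative frequencies
    total = sum(counts)
    probs = [c / total for c in counts]
    raw = [sum(p * x ** k for x, p in zip(bin_centers, probs))
           for k in range(order + 1)]
    mean = raw[1]
    central = [sum(p * (x - mean) ** k for x, p in zip(bin_centers, probs))
               for k in range(order + 1)]
    return raw, central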

The second approach from Choi and Sweetman[14] is an analytical methodology to combine statistical moments from individual segments of a time-history such that the resulting overall moments are those of the complete time-history. This methodology could be used for parallel computation of statistical moments with subsequent combination of those moments, or for combination of statistical moments computed at sequential times.

If Q sets of statistical moments are known: (\gamma_{0,q}, \mu_q, \sigma^2_q, \alpha_{3,q}, \alpha_{4,q}) for q = 1, 2, \ldots, Q, then each \gamma_n can be expressed in terms of the equivalent raw moments:

    \gamma_{n,q} = m_{n,q} \, \gamma_{0,q}, \qquad n = 1, 2, 3, 4

where \gamma_{0,q} is generally taken to be the duration of the q-th time-history, or the number of points if \Delta t is constant.

The benefit of expressing the statistical moments in terms of \gamma is that the Q sets can be combined by addition, and there is no upper limit on the value of Q.

    \gamma_{n,c} = \sum_{q=1}^{Q} \gamma_{n,q}, \qquad n = 0, 1, 2, 3, 4

where the subscript c represents the concatenated time-history or combined \gamma. These combined values of \gamma can then be inversely transformed into raw moments representing the complete concatenated time-history:

    m_{n,c} = \frac{\gamma_{n,c}}{\gamma_{0,c}}, \qquad n = 1, 2, 3, 4

Known relationships between the raw moments (m_n) and the central moments (\theta_n) are then used to compute the central moments of the concatenated time-history. Finally, the statistical moments of the concatenated history are computed from the central moments:

    \mu_c = m_{1,c}, \qquad \sigma^2_c = \theta_{2,c}, \qquad \alpha_{3,c} = \frac{\theta_{3,c}}{\sigma_c^3}, \qquad \alpha_{4,c} = \frac{\theta_{4,c}}{\sigma_c^4} - 3
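An illustrative sketch of this combination step (the function name and the (count, raw-moments) layout are chosen here; the raw-to-central conversions are the standard ones):

def combine_segments(segments):
    # each segment is (count, (m1, m2, m3, m4)): its size and raw moments
    total = sum(n for n, _ in segments)
    m1, m2, m3, m4 = [sum(n * raw[k] for n, raw in segments) / total
                      for k in range(4)]
    # standard relationships between raw and central moments
    var = m2 - m1 ** 2
    theta3 = m3 - 3 * m1 * m2 + 2 * m1 ** 3
    theta4 = m4 - 4 * m1 * m3 + 6 * m1 ** 2 * m2 - 3 * m1 ** 4
    skewness = theta3 / var ** 1.5
    kurtosis = theta4 / var ** 2 - 3
    return m1, var, skewness, kurtosis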

Covariance


Very similar algorithms can be used to compute the covariance.

Naïve algorithm

The naïve algorithm is

    \operatorname{Cov}(X, Y) = \frac{\sum_{i=1}^{n} x_i y_i - \left(\sum_{i=1}^{n} x_i\right)\left(\sum_{i=1}^{n} y_i\right) / n}{n}

For the algorithm above, one could use the following Python code:

def naive_covariance(data1, data2):
    n = len(data1)
    sum1 = sum(data1)
    sum2 = sum(data2)
    sum12 = sum([i1 * i2 for i1, i2 in zip(data1, data2)])

    covariance = (sum12 - sum1 * sum2 / n) / n
    return covariance

With estimate of the mean


As for the variance, the covariance of two random variables is also shift-invariant, so given any two constant values k_x and k_y it can be written:

    \operatorname{Cov}(X, Y) = \operatorname{Cov}(X - k_x, Y - k_y) = \frac{\sum_{i=1}^{n} (x_i - k_x)(y_i - k_y) - \left(\sum_{i=1}^{n} (x_i - k_x)\right)\left(\sum_{i=1}^{n} (y_i - k_y)\right) / n}{n}

and again choosing a value inside the range of values will stabilize the formula against catastrophic cancellation as well as make it more robust against big sums. Taking the first value of each data set, the algorithm can be written as:

def shifted_data_covariance(data_x, data_y):
    n = len(data_x)
    if n < 2:
        return 0
    kx = data_x[0]
    ky = data_y[0]
    Ex = Ey = Exy = 0
    for ix, iy in zip(data_x, data_y):
        Ex += ix - kx
        Ey += iy - ky
        Exy += (ix - kx) * (iy - ky)
    return (Exy - Ex * Ey / n) / n

Two-pass


The two-pass algorithm first computes the sample means,

    \bar{x} = \frac{\sum_{i=1}^{n} x_i}{n}, \qquad \bar{y} = \frac{\sum_{i=1}^{n} y_i}{n}

and then the covariance:

    \operatorname{Cov}(X, Y) = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{n}

The two-pass algorithm may be written as:

def two_pass_covariance(data1, data2):
    n = len(data1)
    mean1 = sum(data1) / n
    mean2 = sum(data2) / n

    covariance = 0
    for i1, i2 in zip(data1, data2):
        a = i1 - mean1
        b = i2 - mean2
        covariance += a * b / n
    return covariance

A slightly more accurate compensated version performs the full naive algorithm on the residuals. The final sums \sum_i (x_i - \bar{x}) and \sum_i (y_i - \bar{y}) should be zero, but the second pass compensates for any small error.
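A sketch of such a compensated version (the function name is chosen here for illustration):

def compensated_two_pass_covariance(data1, data2):
    n = len(data1)
    mean1 = sum(data1) / n
    mean2 = sum(data2) / n
    res1 = [x - mean1 for x in data1]
    res2 = [y - mean2 for y in data2]
    # sum(res1) and sum(res2) would be exactly zero with exact arithmetic;
    # the correction term compensates for rounding error in the means
    covariance = (sum(a * b for a, b in zip(res1, res2))
                  - sum(res1) * sum(res2) / n) / n
    return covariance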

Online


A stable one-pass algorithm exists, similar to the online algorithm for computing the variance, that computes the co-moment C_n = \sum_{i=1}^{n} (x_i - \bar{x}_n)(y_i - \bar{y}_n):

    \bar{x}_n = \bar{x}_{n-1} + \frac{x_n - \bar{x}_{n-1}}{n}

    \bar{y}_n = \bar{y}_{n-1} + \frac{y_n - \bar{y}_{n-1}}{n}

    C_n = C_{n-1} + (x_n - \bar{x}_n)(y_n - \bar{y}_{n-1}) = C_{n-1} + (x_n - \bar{x}_{n-1})(y_n - \bar{y}_n)

The apparent asymmetry in that last equation is due to the fact that (x_n - \bar{x}_n) = \frac{n-1}{n}(x_n - \bar{x}_{n-1}), so both update terms are equal to \frac{n-1}{n}(x_n - \bar{x}_{n-1})(y_n - \bar{y}_{n-1}). Even greater accuracy can be achieved by first computing the means, then using the stable one-pass algorithm on the residuals.

Thus the covariance can be computed as

    \operatorname{Cov}_N(X, Y) = \frac{C_N}{N} \qquad \text{(or } C_N / (N - 1) \text{ for the sample covariance)}

def online_covariance(data1, data2):
    meanx = meany = C = n = 0
    for x, y in zip(data1, data2):
        n += 1
        dx = x - meanx
        meanx += dx / n
        meany += (y - meany) / n
        C += dx * (y - meany)

    population_covar = C / n
    # Bessel's correction for sample covariance
    sample_covar = C / (n - 1)
    return (population_covar, sample_covar)

A small modification can also be made to compute the weighted covariance:

def online_weighted_covariance(data1, data2, data3):
    meanx = meany = 0
    wsum = wsum2 = 0
    C = 0
    for x, y, w in zip(data1, data2, data3):
        wsum += w
        wsum2 += w * w
        dx = x - meanx
        meanx += (w / wsum) * dx
        meany += (w / wsum) * (y - meany)
        C += w * dx * (y - meany)

    population_covar = C / wsum
    # Bessel's correction for sample covariance
    # Frequency weights
    sample_frequency_covar = C / (wsum - 1)
    # Reliability weights
    sample_reliability_covar = C / (wsum - wsum2 / wsum)
    return (population_covar, sample_frequency_covar, sample_reliability_covar)

Likewise, there is a formula for combining the covariances of two sets that can be used to parallelize the computation:[3]

    C_X = C_A + C_B + (\bar{x}_A - \bar{x}_B)(\bar{y}_A - \bar{y}_B) \cdot \frac{n_A n_B}{n_X}
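A sketch of the corresponding merge step (illustrative; the function name and argument order are chosen here):

def parallel_covariance(n_a, mean_xa, mean_ya, C_a, n_b, mean_xb, mean_yb, C_b):
    # combine the co-moments of two partitions using the formula above
    n = n_a + n_b
    C = C_a + C_b + (mean_xa - mean_xb) * (mean_ya - mean_yb) * n_a * n_b / n
    return (n, C)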

Weighted batched version


A version of the weighted online algorithm that does batched updates also exists: let w_1, w_2, \ldots, w_N denote the weights, and write

    \bar{x}_{n+k} = \bar{x}_n + \frac{\sum_{i=n+1}^{n+k} w_i (x_i - \bar{x}_n)}{\sum_{i=1}^{n+k} w_i}

    \bar{y}_{n+k} = \bar{y}_n + \frac{\sum_{i=n+1}^{n+k} w_i (y_i - \bar{y}_n)}{\sum_{i=1}^{n+k} w_i}

The covariance can then be computed as

    C_{n+k} = C_n + \sum_{i=n+1}^{n+k} w_i (x_i - \bar{x}_{n+k})(y_i - \bar{y}_n) = C_n + \sum_{i=n+1}^{n+k} w_i (x_i - \bar{x}_n)(y_i - \bar{y}_{n+k})


References

  1. ^ a b Einarsson, Bo (2005). Accuracy and Reliability in Scientific Computing. SIAM. p. 47. ISBN 978-0-89871-584-2.
  2. ^ a b c Chan, Tony F.; Golub, Gene H.; LeVeque, Randall J. (1983). "Algorithms for computing the sample variance: Analysis and recommendations" (PDF). The American Statistician. 37 (3): 242–247. doi:10.1080/00031305.1983.10483115. JSTOR 2683386. Archived (PDF) from the original on 9 October 2022.
  3. ^ a b c Schubert, Erich; Gertz, Michael (9 July 2018). Numerically stable parallel computation of (co-)variance. ACM. p. 10. doi:10.1145/3221269.3223036. ISBN 9781450365055. S2CID 49665540.
  4. ^ Higham, Nicholas J. (2002). "Problem 1.10". Accuracy and Stability of Numerical Algorithms (2nd ed.). Philadelphia, PA: Society for Industrial and Applied Mathematics. doi:10.1137/1.9780898718027. ISBN 978-0-898715-21-7.
  5. ^ Welford, B. P. (1962). "Note on a method for calculating corrected sums of squares and products". Technometrics. 4 (3): 419–420. doi:10.2307/1266577. JSTOR 1266577.
  6. ^ Donald E. Knuth (1998). The Art of Computer Programming, volume 2: Seminumerical Algorithms, 3rd edn., p. 232. Boston: Addison-Wesley.
  7. ^ Ling, Robert F. (1974). "Comparison of Several Algorithms for Computing Sample Means and Variances". Journal of the American Statistical Association. 69 (348): 859–866. doi:10.2307/2286154. JSTOR 2286154.
  8. ^ Cook, John D. (30 September 2022) [1 November 2014]. "Accurately computing sample variance". John D. Cook Consulting.
  9. ^ West, D. H. D. (1979). "Updating Mean and Variance Estimates: An Improved Method". Communications of the ACM. 22 (9): 532–535. doi:10.1145/359146.359153. S2CID 30671293.
  10. ^ Chan, Tony F.; Golub, Gene H.; LeVeque, Randall J. (November 1979). "Updating Formulae and a Pairwise Algorithm for Computing Sample Variances" (PDF). Technical Report STAN-CS-79-773, Department of Computer Science, Stanford University.
  11. ^ Terriberry, Timothy B. (15 October 2008) [9 December 2007]. "Computing Higher-Order Moments Online". Archived from the original on 23 April 2014. Retrieved 5 May 2008.
  12. ^ Pébay, Philippe Pierre (September 2008). "Formulas for Robust, One-Pass Parallel Computation of Covariances and Arbitrary-Order Statistical Moments". Technical Report SAND2008-6212. Albuquerque, NM, and Livermore, CA (United States): Sandia National Laboratories (SNL). doi:10.2172/1028931. OSTI 1028931.
  13. ^ Pébay, Philippe; Terriberry, Timothy; Kolla, Hemanth; Bennett, Janine (2016). "Numerically Stable, Scalable Formulas for Parallel and Online Computation of Higher-Order Multivariate Central Moments with Arbitrary Weights". Computational Statistics. 31 (4). Springer: 1305–1325. doi:10.1007/s00180-015-0637-z. S2CID 124570169.
  14. ^ a b Choi, Myoungkeun; Sweetman, Bert (2010). "Efficient Calculation of Statistical Moments for Structural Health Monitoring". Journal of Structural Health Monitoring. 9 (1): 13–24. doi:10.1177/1475921709341014. S2CID 17534100.