Algorithms for calculating variance

From Wikipedia, the free encyclopedia

Algorithms for calculating variance play a major role in computational statistics. A key difficulty in the design of good algorithms for this problem is that formulas for the variance may involve sums of squares, which can lead to numerical instability as well as to arithmetic overflow when dealing with large values.

Naïve algorithm


A formula for calculating the variance of an entire population of size N is:

$\sigma^2 = \frac{\sum_{i=1}^{N} x_i^2 - \left(\sum_{i=1}^{N} x_i\right)^2 / N}{N}$

Using Bessel's correction to calculate an unbiased estimate of the population variance from a finite sample of n observations, the formula is:

$s^2 = \frac{\sum_{i=1}^{n} x_i^2 - \left(\sum_{i=1}^{n} x_i\right)^2 / n}{n - 1}$

Therefore, a naïve algorithm to calculate the estimated variance is given by the following:

  • Let n ← 0, Sum ← 0, SumSq ← 0
  • For each datum x:
    • n ← n + 1
    • Sum ← Sum + x
    • SumSq ← SumSq + x × x
  • Var = (SumSq − (Sum × Sum) / n) / (n − 1)

This algorithm can easily be adapted to compute the variance of a finite population: simply divide by n instead of n − 1 on the last line.
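A direct Python transcription of this pseudocode might look like the following sketch (for illustration only; as noted below, this algorithm should not be used in practice):

def naive_variance(data):
    n = 0
    sum_ = 0.0
    sum_sq = 0.0
    for x in data:
        n += 1
        sum_ += x
        sum_sq += x * x
    # use n instead of (n - 1) in the denominator for a finite population
    return (sum_sq - sum_ * sum_ / n) / (n - 1)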

Because SumSq and (Sum × Sum)/n can be very similar numbers, cancellation can cause the precision of the result to be much less than the inherent precision of the floating-point arithmetic used to perform the computation. This is particularly bad if the standard deviation is small relative to the mean. Thus this algorithm should not be used in practice,[1][2] and several alternative, numerically stable algorithms have been proposed.[3]

Computing shifted data


The variance is invariant with respect to changes in a location parameter, a property which can be used to avoid the catastrophic cancellation in this formula:

$\operatorname{Var}(X) = \operatorname{Var}(X - K)$

with $K$ any constant, which leads to the new formula

$s^2 = \frac{\sum_{i=1}^{n} (x_i - K)^2 - \left(\sum_{i=1}^{n} (x_i - K)\right)^2 / n}{n - 1}.$

The closer $K$ is to the mean value, the more accurate the result will be, but just choosing a value inside the range of samples will guarantee the desired stability. If the values $(x_i - K)$ are small, then there are no problems with the sum of their squares; on the contrary, if they are large, it necessarily means that the variance is large as well. In any case, the second term in the formula is always smaller than the first one, therefore no catastrophic cancellation can occur.[2]

If just the first sample is taken as $K$, the algorithm can be written in the Python programming language as

def shifted_data_variance(data):
    if len(data) < 2:
        return 0.0
    K = data[0]
    n = Ex = Ex2 = 0.0
    for x in data:
        n += 1
        Ex += x - K
        Ex2 += (x - K) ** 2
    variance = (Ex2 - Ex**2 / n) / (n - 1)
    # use n instead of (n - 1) if you want to compute the exact variance of the given data
    # use (n - 1) if the data are samples of a larger population
    return variance

This formula also facilitates incremental computation, which can be expressed as

K = Ex = Ex2 = 0.0
n = 0


def add_variable(x):
    global K, n, Ex, Ex2
    if n == 0:
        K = x
    n += 1
    Ex += x - K
    Ex2 += (x - K) ** 2

def remove_variable(x):
    global K, n, Ex, Ex2
    n -= 1
    Ex -= x - K
    Ex2 -= (x - K) ** 2

def get_mean():
    global K, n, Ex
    return K + Ex / n

def get_variance():
    global n, Ex, Ex2
    return (Ex2 - Ex**2 / n) / (n - 1)
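For example, streaming in the sample (4, 7, 13, 16) used in the example below reproduces the expected results:

add_variable(4)
add_variable(7)
add_variable(13)
add_variable(16)
print(get_mean())      # 10.0
print(get_variance())  # 30.0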

Two-pass algorithm


An alternative approach, using a different formula for the variance, first computes the sample mean,

$\bar x = \frac{1}{n}\sum_{i=1}^{n} x_i,$

and then computes the sum of the squares of the differences from the mean,

$s^2 = \frac{\sum_{i=1}^{n} (x_i - \bar x)^2}{n - 1},$

where s is the standard deviation. This is given by the following code:

def two_pass_variance(data):
    n = len(data)
    mean = sum(data) / n
    variance = sum((x - mean) ** 2 for x in data) / (n - 1)
    return variance

This algorithm is numerically stable if n is small.[1][4] However, the results of both of these simple algorithms ("naïve" and "two-pass") can depend inordinately on the ordering of the data and can give poor results for very large data sets due to repeated roundoff error in the accumulation of the sums. Techniques such as compensated summation can be used to combat this error to a degree.
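One such compensated variant, sketched below, adds a correction term built from the residual sum; that sum would be exactly zero in exact arithmetic, so in floating point it captures part of the accumulated rounding error (the function name is illustrative):

def compensated_variance(data):
    n = len(data)
    mean = sum(data) / n
    # Sum of squared deviations and of raw deviations from the mean.
    # The second sum would be exactly zero in exact arithmetic; in floating
    # point it serves as a correction for accumulated rounding error.
    sum_sq = sum((x - mean) ** 2 for x in data)
    sum_res = sum(x - mean for x in data)
    return (sum_sq - sum_res**2 / n) / (n - 1)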

Welford's online algorithm


It is often useful to be able to compute the variance in a single pass, inspecting each value only once; for example, when the data is being collected without enough storage to keep all the values, or when costs of memory access dominate those of computation. For such an online algorithm, a recurrence relation is required between quantities from which the required statistics can be calculated in a numerically stable fashion.

The following formulas can be used to update the mean and (estimated) variance of the sequence, for an additional element $x_n$. Here, $\bar x_n = \frac{1}{n}\sum_{i=1}^{n} x_i$ denotes the sample mean of the first n samples, $\sigma^2_n$ their biased sample variance, and $s^2_n$ their unbiased sample variance:

$\bar x_n = \bar x_{n-1} + \frac{x_n - \bar x_{n-1}}{n}$

$\sigma^2_n = \sigma^2_{n-1} + \frac{(x_n - \bar x_{n-1})(x_n - \bar x_n) - \sigma^2_{n-1}}{n}$

$s^2_n = \frac{n-2}{n-1}\, s^2_{n-1} + \frac{(x_n - \bar x_{n-1})^2}{n}, \qquad n > 1$

These formulas suffer from numerical instability[citation needed], as they repeatedly subtract a small number from a big number which scales with n. A better quantity for updating is the sum of squares of differences from the current mean, $\sum_{i=1}^{n} (x_i - \bar x_n)^2$, here denoted $M_{2,n}$:

$M_{2,n} = M_{2,n-1} + (x_n - \bar x_{n-1})(x_n - \bar x_n)$

$\sigma^2_n = \frac{M_{2,n}}{n}, \qquad s^2_n = \frac{M_{2,n}}{n - 1}$

This algorithm was found by Welford,[5][6] and it has been thoroughly analyzed.[2][7] It is also common to denote $M_k = \bar x_k$ and $S_k = M_{2,k}$.[8]

An example Python implementation for Welford's algorithm is given below.

# For a new value new_value, compute the new count, new mean, the new M2.
# mean accumulates the mean of the entire dataset
# M2 aggregates the squared distance from the mean
# count aggregates the number of samples seen so far
def update(existing_aggregate, new_value):
    (count, mean, M2) = existing_aggregate
    count += 1
    delta = new_value - mean
    mean += delta / count
    delta2 = new_value - mean
    M2 += delta * delta2
    return (count, mean, M2)

# Retrieve the mean, variance and sample variance from an aggregate
def finalize(existing_aggregate):
    (count, mean, M2) = existing_aggregate
    if count < 2:
        return float("nan")
    else:
        (mean, variance, sample_variance) = (mean, M2 / count, M2 / (count - 1))
        return (mean, variance, sample_variance)
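As an illustration, running these functions over the sample (4, 7, 13, 16) used in the example below yields a mean of 10.0, a biased variance of 22.5 and a sample variance of 30.0:

aggregate = (0, 0.0, 0.0)  # (count, mean, M2)
for x in (4, 7, 13, 16):
    aggregate = update(aggregate, x)

mean, variance, sample_variance = finalize(aggregate)  # 10.0, 22.5, 30.0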

This algorithm is much less prone to loss of precision due to catastrophic cancellation, but might not be as efficient because of the division operation inside the loop. For a particularly robust two-pass algorithm for computing the variance, one can first compute and subtract an estimate of the mean, and then use this algorithm on the residuals.

The parallel algorithm below illustrates how to merge multiple sets of statistics calculated online.

Weighted incremental algorithm


The algorithm can be extended to handle unequal sample weights, replacing the simple counter n with the sum of weights seen so far. West (1979)[9] suggests this incremental algorithm:

def weighted_incremental_variance(data_weight_pairs):
    w_sum = w_sum2 = mean = S = 0

    for x, w in data_weight_pairs:
        w_sum = w_sum + w
        w_sum2 = w_sum2 + w**2
        mean_old = mean
        mean = mean_old + (w / w_sum) * (x - mean_old)
        S = S + w * (x - mean_old) * (x - mean)

    population_variance = S / w_sum
    # Bessel's correction for weighted samples
    # Frequency weights
    sample_frequency_variance = S / (w_sum - 1)
    # Reliability weights
    sample_reliability_variance = S / (w_sum - w_sum2 / w_sum)
    return population_variance, sample_frequency_variance, sample_reliability_variance

Parallel algorithm


Chan et al.[10] note that Welford's online algorithm detailed above is a special case of an algorithm that works for combining arbitrary sets $A$ and $B$:

$n_{AB} = n_A + n_B$

$\delta = \bar x_B - \bar x_A$

$\bar x_{AB} = \bar x_A + \delta \cdot \frac{n_B}{n_{AB}}$

$M_{2,AB} = M_{2,A} + M_{2,B} + \delta^2 \cdot \frac{n_A n_B}{n_{AB}}.$

This may be useful when, for example, multiple processing units may be assigned to discrete parts of the input.

Chan's method for estimating the mean is numerically unstable when $n_A \approx n_B$ and both are large, because the numerical error in $\delta = \bar x_B - \bar x_A$ is not scaled down in the way that it is in the $n_B = 1$ case. In such cases, prefer $\bar x_{AB} = \frac{n_A \bar x_A + n_B \bar x_B}{n_{AB}}$.

def parallel_variance(n_a, avg_a, M2_a, n_b, avg_b, M2_b):
    n = n_a + n_b
    delta = avg_b - avg_a
    M2 = M2_a + M2_b + delta**2 * n_a * n_b / n
    var_ab = M2 / (n - 1)
    return var_ab
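As a quick sanity check, splitting the sample (4, 7, 13, 16) into two halves, summarizing each half separately, and merging reproduces the sample variance of 30 computed by two_pass_variance above (the split point is arbitrary):

data = [4, 7, 13, 16]
n_a, avg_a = 2, sum(data[:2]) / 2
M2_a = sum((x - avg_a) ** 2 for x in data[:2])
n_b, avg_b = 2, sum(data[2:]) / 2
M2_b = sum((x - avg_b) ** 2 for x in data[2:])
print(parallel_variance(n_a, avg_a, M2_a, n_b, avg_b, M2_b))  # 30.0
print(two_pass_variance(data))                                # 30.0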

This can be generalized to allow parallelization with AVX, with GPUs, and with computer clusters, and to covariance.[3]

Example


Assume that all floating point operations use standard IEEE 754 double-precision arithmetic. Consider the sample (4, 7, 13, 16) from an infinite population. Based on this sample, the estimated population mean is 10, and the unbiased estimate of population variance is 30. Both the naïve algorithm and the two-pass algorithm compute these values correctly.

Next consider the sample (10^8 + 4, 10^8 + 7, 10^8 + 13, 10^8 + 16), which gives rise to the same estimated variance as the first sample. The two-pass algorithm computes this variance estimate correctly, but the naïve algorithm returns 29.333333333333332 instead of 30.

While this loss of precision may be tolerable and viewed as a minor flaw of the naïve algorithm, further increasing the offset makes the error catastrophic. Consider the sample (10^9 + 4, 10^9 + 7, 10^9 + 13, 10^9 + 16). Again the estimated population variance of 30 is computed correctly by the two-pass algorithm, but the naïve algorithm now computes it as −170.66666666666666. This is a serious problem with the naïve algorithm and is due to catastrophic cancellation in the subtraction of two similar numbers at the final stage of the algorithm.
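The effect can be reproduced with the naive_variance and two_pass_variance sketches given earlier (the exact digits may vary slightly with summation order, but the qualitative behaviour is the same):

base = (4, 7, 13, 16)
for offset in (0.0, 1e8, 1e9):
    sample = [offset + x for x in base]
    print(offset, naive_variance(sample), two_pass_variance(sample))

# With no offset both algorithms return 30.0; with an offset of 10^8 the naive
# result drifts to roughly 29.33, and with 10^9 it collapses to about -170.67,
# while the two-pass result stays at 30.0.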

Higher-order statistics


Terriberry[11] extends Chan's formulae to calculating the third and fourth central moments, needed for example when estimating skewness and kurtosis:

$M_{3,X} = M_{3,A} + M_{3,B} + \delta^3\,\frac{n_A n_B (n_A - n_B)}{n_X^2} + 3\,\delta\,\frac{n_A M_{2,B} - n_B M_{2,A}}{n_X}$

$M_{4,X} = M_{4,A} + M_{4,B} + \delta^4\,\frac{n_A n_B \left(n_A^2 - n_A n_B + n_B^2\right)}{n_X^3} + 6\,\delta^2\,\frac{n_A^2 M_{2,B} + n_B^2 M_{2,A}}{n_X^2} + 4\,\delta\,\frac{n_A M_{3,B} - n_B M_{3,A}}{n_X}$

Here the $M_k$ are again the sums of $k$-th powers of differences from the mean, $\sum_i (x_i - \bar x)^k$, giving

skewness: $g_1 = \frac{\sqrt{n}\, M_3}{M_2^{3/2}},$

kurtosis: $g_2 = \frac{n\, M_4}{M_2^2} - 3.$

For the incremental case (i.e., $B = \{x\}$), this simplifies to:

$\delta = x - \bar x$

$\bar x' = \bar x + \frac{\delta}{n}$

$M_2' = M_2 + \delta^2\,\frac{n - 1}{n}$

$M_3' = M_3 + \delta^3\,\frac{(n - 1)(n - 2)}{n^2} - \frac{3\,\delta\, M_2}{n}$

$M_4' = M_4 + \frac{\delta^4 (n - 1)(n^2 - 3n + 3)}{n^3} + \frac{6\,\delta^2 M_2}{n^2} - \frac{4\,\delta\, M_3}{n}$

By preserving the value $\delta / n$, only one division operation is needed and the higher-order statistics can thus be calculated for little incremental cost.

An example of the online algorithm for kurtosis implemented as described is:

def online_kurtosis(data):
    n = mean = M2 = M3 = M4 = 0

    for x in data:
        n1 = n
        n = n + 1
        delta = x - mean
        delta_n = delta / n
        delta_n2 = delta_n**2
        term1 = delta * delta_n * n1
        mean = mean + delta_n
        M4 = M4 + term1 * delta_n2 * (n**2 - 3*n + 3) + 6 * delta_n2 * M2 - 4 * delta_n * M3
        M3 = M3 + term1 * delta_n * (n - 2) - 3 * delta_n * M2
        M2 = M2 + term1

    # Note, you may also calculate variance using M2, and skewness using M3
    # Caution: If all the inputs are the same, M2 will be 0, resulting in a division by 0.
    kurtosis = (n * M4) / (M2**2) - 3
    return kurtosis
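For the sample (4, 7, 13, 16) used in the example above, M2 = 90 and M4 = 2754, so this returns an excess kurtosis of approximately −1.64:

print(online_kurtosis([4, 7, 13, 16]))  # ≈ -1.64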

Pébay[12] further extends these results to arbitrary-order central moments, for the incremental and the pairwise cases, and subsequently Pébay et al.[13] for weighted and compound moments. One can also find there similar formulas for covariance.

Choi and Sweetman[14] offer two alternative methods to compute the skewness and kurtosis, each of which can save substantial computer memory requirements and CPU time in certain applications. The first approach is to compute the statistical moments by separating the data into bins and then computing the moments from the geometry of the resulting histogram, which effectively becomes a one-pass algorithm for higher moments. One benefit is that the statistical moment calculations can be carried out to arbitrary accuracy such that the computations can be tuned to the precision of, e.g., the data storage format or the original measurement hardware. A relative histogram of a random variable can be constructed in the conventional way: the range of potential values is divided into bins and the number of occurrences within each bin is counted and plotted such that the area of each rectangle equals the portion of the sample values within that bin:

$H(x_k) = \frac{h(x_k)}{A},$

where $h(x_k)$ and $H(x_k)$ represent the frequency and the relative frequency at bin $x_k$, and $A = \sum_{k=1}^{K} h(x_k)\,\Delta x_k$ is the total area of the histogram. After this normalization, the raw moments and central moments of $x(t)$ can be calculated from the relative histogram:

$m_n^{(h)} = \sum_{k=1}^{K} x_k^n\, H(x_k)\, \Delta x_k$

$\theta_n^{(h)} = \sum_{k=1}^{K} \left(x_k - m_1^{(h)}\right)^n H(x_k)\, \Delta x_k$

where the superscript $(h)$ indicates the moments are calculated from the histogram. For constant bin width $\Delta x_k = \Delta x$ these two expressions can be simplified using $A = \Delta x \cdot N$:

$m_n^{(h)} = \frac{1}{N}\sum_{k=1}^{K} x_k^n\, h(x_k)$

$\theta_n^{(h)} = \frac{1}{N}\sum_{k=1}^{K} \left(x_k - m_1^{(h)}\right)^n h(x_k)$
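A rough Python sketch of this binned approach is given below; the function name, the equal-width-bin assumption, and the use of NumPy are illustrative choices, not part of the original method's presentation:

import numpy as np

def histogram_moments(bin_centers, bin_counts, num_moments=4):
    # Approximate raw and central moments of a sample from its histogram.
    # bin_centers are the x_k, bin_counts the h(x_k); equal bin widths are
    # assumed, so the relative weights reduce to h(x_k) / N.
    centers = np.asarray(bin_centers, dtype=float)
    weights = np.asarray(bin_counts, dtype=float) / np.sum(bin_counts)
    raw = [np.sum(weights * centers**k) for k in range(1, num_moments + 1)]
    mean = raw[0]
    central = [np.sum(weights * (centers - mean) ** k) for k in range(1, num_moments + 1)]
    return raw, central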

The second approach from Choi and Sweetman[14] is an analytical methodology to combine statistical moments from individual segments of a time-history such that the resulting overall moments are those of the complete time-history. This methodology could be used for parallel computation of statistical moments with subsequent combination of those moments, or for combination of statistical moments computed at sequential times.

If $Q$ sets of statistical moments are known, $(\gamma_{0,q}, \mu_q, \sigma^2_q, \alpha_{3,q}, \alpha_{4,q})$ for $q = 1, 2, \ldots, Q$, then each set can be expressed in terms of equivalent raw moments $\gamma_{n,q}$:

$\gamma_{n,q} = m_{n,q}\,\gamma_{0,q}, \qquad n = 1, 2, 3, 4,$

where $\gamma_{0,q}$ is generally taken to be the duration of the $q^{\text{th}}$ time-history, or the number of points if $\Delta t$ is constant.

The benefit of expressing the statistical moments in terms of $\gamma$ is that the $Q$ sets can be combined by addition, and there is no upper limit on the value of $Q$:

$\gamma_{n,c} = \sum_{q=1}^{Q} \gamma_{n,q}, \qquad n = 0, 1, 2, 3, 4,$

where the subscript $c$ represents the concatenated time-history or combined $\gamma$. These combined values of $\gamma$ can then be inversely transformed into raw moments representing the complete concatenated time-history:

$m_{n,c} = \frac{\gamma_{n,c}}{\gamma_{0,c}}, \qquad n = 1, 2, 3, 4.$

Known relationships between the raw moments ($m_n$) and the central moments ($\theta_n$) are then used to compute the central moments of the concatenated time-history. Finally, the statistical moments of the concatenated history are computed from the central moments:

$\mu_c = m_{1,c}, \qquad \sigma_c^2 = \theta_{2,c}, \qquad \alpha_{3,c} = \frac{\theta_{3,c}}{\sigma_c^3}, \qquad \alpha_{4,c} = \frac{\theta_{4,c}}{\sigma_c^4} - 3.$

Covariance


Very similar algorithms can be used to compute the covariance.

Naïve algorithm


The naïve algorithm is

$\operatorname{Cov}(X, Y) = \frac{\sum_{i=1}^{n} x_i y_i - \left(\sum_{i=1}^{n} x_i\right)\left(\sum_{i=1}^{n} y_i\right) / n}{n}.$

For the algorithm above, one could use the following Python code:

def naive_covariance(data1, data2):
    n = len(data1)
    sum1 = sum(data1)
    sum2 = sum(data2)
    sum12 = sum([i1 * i2 for i1, i2 in zip(data1, data2)])

    covariance = (sum12 - sum1 * sum2 / n) / n
    return covariance

With estimate of the mean


As for the variance, the covariance of two random variables is also shift-invariant, so given any two constant values $k_x$ and $k_y$, it can be written:

$\operatorname{Cov}(X, Y) = \operatorname{Cov}(X - k_x, Y - k_y) = \frac{\sum_{i=1}^{n} (x_i - k_x)(y_i - k_y) - \left(\sum_{i=1}^{n} (x_i - k_x)\right)\left(\sum_{i=1}^{n} (y_i - k_y)\right) / n}{n},$

and again choosing a value inside the range of values will stabilize the formula against catastrophic cancellation as well as make it more robust against big sums. Taking the first value of each data set, the algorithm can be written as:

def shifted_data_covariance(data_x, data_y):
    n = len(data_x)
    if n < 2:
        return 0
    kx = data_x[0]
    ky = data_y[0]
    Ex = Ey = Exy = 0
    for ix, iy in zip(data_x, data_y):
        Ex += ix - kx
        Ey += iy - ky
        Exy += (ix - kx) * (iy - ky)
    return (Exy - Ex * Ey / n) / n

Two-pass


The two-pass algorithm first computes the sample means,

$\bar x = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad \bar y = \frac{1}{n}\sum_{i=1}^{n} y_i,$

and then the covariance:

$\operatorname{Cov}(X, Y) = \frac{\sum_{i=1}^{n} (x_i - \bar x)(y_i - \bar y)}{n}.$

The two-pass algorithm may be written as:

def two_pass_covariance(data1, data2):
    n = len(data1)
    mean1 = sum(data1) / n
    mean2 = sum(data2) / n

    covariance = 0
    for i1, i2 in zip(data1, data2):
        a = i1 - mean1
        b = i2 - mean2
        covariance += a * b / n
    return covariance

A slightly more accurate compensated version performs the full naïve algorithm on the residuals. The final sums $\sum_i (x_i - \bar x)$ and $\sum_i (y_i - \bar y)$ should be zero, but the second pass compensates for any small error.
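A minimal sketch of that compensated variant, reusing the naive_covariance function above on the residuals (the function name is chosen here for illustration):

def compensated_covariance(data1, data2):
    n = len(data1)
    mean1 = sum(data1) / n
    mean2 = sum(data2) / n
    # The residual sums are (nearly) zero, so the correction term inside
    # naive_covariance only has to absorb the small rounding error.
    residuals1 = [x - mean1 for x in data1]
    residuals2 = [y - mean2 for y in data2]
    return naive_covariance(residuals1, residuals2)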

Online


A stable one-pass algorithm exists, similar to the online algorithm for computing the variance, that computes the co-moment $C_n = \sum_{i=1}^{n} (x_i - \bar x_n)(y_i - \bar y_n)$:

$\bar x_n = \bar x_{n-1} + \frac{x_n - \bar x_{n-1}}{n}$

$\bar y_n = \bar y_{n-1} + \frac{y_n - \bar y_{n-1}}{n}$

$C_n = C_{n-1} + (x_n - \bar x_n)(y_n - \bar y_{n-1}) = C_{n-1} + (x_n - \bar x_{n-1})(y_n - \bar y_n)$

The apparent asymmetry in that last equation is due to the fact that $(x_n - \bar x_n) = \frac{n-1}{n}(x_n - \bar x_{n-1})$, so both update terms are equal to $\frac{n-1}{n}(x_n - \bar x_{n-1})(y_n - \bar y_{n-1})$. Even greater accuracy can be achieved by first computing the means, then using the stable one-pass algorithm on the residuals.

Thus the covariance can be computed as

$\operatorname{Cov}_N(X, Y) = \frac{C_N}{N} \;\text{(population covariance)}, \qquad \frac{C_N}{N - 1} \;\text{(sample covariance)}.$

def online_covariance(data1, data2):
    meanx = meany = C = n = 0
    for x, y in zip(data1, data2):
        n += 1
        dx = x - meanx
        meanx += dx / n
        meany += (y - meany) / n
        C += dx * (y - meany)

    population_covar = C / n
    # Bessel's correction for sample covariance
    sample_covar = C / (n - 1)
    return population_covar, sample_covar

A small modification can also be made to compute the weighted covariance:

def online_weighted_covariance(data1, data2, data3):
    meanx = meany = 0
    wsum = wsum2 = 0
    C = 0
    for x, y, w in zip(data1, data2, data3):
        wsum += w
        wsum2 += w * w
        dx = x - meanx
        meanx += (w / wsum) * dx
        meany += (w / wsum) * (y - meany)
        C += w * dx * (y - meany)

    population_covar = C / wsum
    # Bessel's correction for sample covariance
    # Frequency weights
    sample_frequency_covar = C / (wsum - 1)
    # Reliability weights
    sample_reliability_covar = C / (wsum - wsum2 / wsum)
    return population_covar, sample_frequency_covar, sample_reliability_covar

Likewise, there is a formula for combining the covariances of two sets that can be used to parallelize the computation:[3]

$C_X = C_A + C_B + (\bar x_A - \bar x_B)(\bar y_A - \bar y_B) \cdot \frac{n_A n_B}{n_X}.$
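A merge step mirroring parallel_variance above might look like the following sketch (the argument names are illustrative):

def parallel_covariance(n_a, xmean_a, ymean_a, C_a, n_b, xmean_b, ymean_b, C_b):
    n = n_a + n_b
    C = C_a + C_b + (xmean_a - xmean_b) * (ymean_a - ymean_b) * n_a * n_b / n
    sample_covar = C / (n - 1)
    return sample_covar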

Weighted batched version


A version of the weighted online algorithm that does batched updates also exists: let $w_1, \dots, w_N$ denote the weights, and write

$\bar x_{n+k} = \bar x_n + \frac{\sum_{i=n+1}^{n+k} w_i (x_i - \bar x_n)}{\sum_{i=1}^{n+k} w_i}$

$\bar y_{n+k} = \bar y_n + \frac{\sum_{i=n+1}^{n+k} w_i (y_i - \bar y_n)}{\sum_{i=1}^{n+k} w_i}$

$C_{n+k} = C_n + \sum_{i=n+1}^{n+k} w_i (x_i - \bar x_{n+k})(y_i - \bar y_n)$

The covariance can then be computed as

$\operatorname{Cov}_N(X, Y) = \frac{C_N}{\sum_{i=1}^{N} w_i}.$

References

  1. ^ a b Einarsson, Bo (2005). Accuracy and Reliability in Scientific Computing. SIAM. p. 47. ISBN 978-0-89871-584-2.
  2. ^ a b c Chan, Tony F.; Golub, Gene H.; LeVeque, Randall J. (1983). "Algorithms for computing the sample variance: Analysis and recommendations" (PDF). The American Statistician. 37 (3): 242–247. doi:10.1080/00031305.1983.10483115. JSTOR 2683386. Archived (PDF) from the original on 9 October 2022.
  3. ^ a b c Schubert, Erich; Gertz, Michael (9 July 2018). Numerically stable parallel computation of (co-)variance. ACM. p. 10. doi:10.1145/3221269.3223036. ISBN 9781450365055. S2CID 49665540.
  4. ^ Higham, Nicholas J. (2002). "Problem 1.10". Accuracy and Stability of Numerical Algorithms (2nd ed.). Philadelphia, PA: Society for Industrial and Applied Mathematics. doi:10.1137/1.9780898718027. ISBN 978-0-898715-21-7.
  5. ^ Welford, B. P. (1962). "Note on a method for calculating corrected sums of squares and products". Technometrics. 4 (3): 419–420. doi:10.2307/1266577. JSTOR 1266577.
  6. ^ Donald E. Knuth (1998). The Art of Computer Programming, volume 2: Seminumerical Algorithms, 3rd edn., p. 232. Boston: Addison-Wesley.
  7. ^ Ling, Robert F. (1974). "Comparison of Several Algorithms for Computing Sample Means and Variances". Journal of the American Statistical Association. 69 (348): 859–866. doi:10.2307/2286154. JSTOR 2286154.
  8. ^ Cook, John D. (30 September 2022) [1 November 2014]. "Accurately computing sample variance". John D. Cook Consulting.
  9. ^ West, D. H. D. (1979). "Updating Mean and Variance Estimates: An Improved Method". Communications of the ACM. 22 (9): 532–535. doi:10.1145/359146.359153. S2CID 30671293.
  10. ^ Chan, Tony F.; Golub, Gene H.; LeVeque, Randall J. (November 1979). "Updating Formulae and a Pairwise Algorithm for Computing Sample Variances" (PDF). Technical Report STAN-CS-79-773, Department of Computer Science, Stanford University.
  11. ^ Terriberry, Timothy B. (15 October 2008) [9 December 2007]. "Computing Higher-Order Moments Online". Archived from the original on 23 April 2014. Retrieved 5 May 2008.
  12. ^ Pébay, Philippe Pierre (September 2008). "Formulas for Robust, One-Pass Parallel Computation of Covariances and Arbitrary-Order Statistical Moments". Technical Report SAND2008-6212. Albuquerque, NM, and Livermore, CA (United States): Sandia National Laboratories (SNL). doi:10.2172/1028931. OSTI 1028931.
  13. ^ Pébay, Philippe; Terriberry, Timothy; Kolla, Hemanth; Bennett, Janine (2016). "Numerically Stable, Scalable Formulas for Parallel and Online Computation of Higher-Order Multivariate Central Moments with Arbitrary Weights". Computational Statistics. 31 (4). Springer: 1305–1325. doi:10.1007/s00180-015-0637-z. S2CID 124570169.
  14. ^ a b Choi, Myoungkeun; Sweetman, Bert (2010). "Efficient Calculation of Statistical Moments for Structural Health Monitoring". Journal of Structural Health Monitoring. 9 (1): 13–24. doi:10.1177/1475921709341014. S2CID 17534100.