
In statistics, the Kendall rank correlation coefficient, commonly referred to as Kendall's τ coefficient (after the Greek letter τ, tau), is a statistic used to measure the ordinal association between two measured quantities. A τ test is a non-parametric hypothesis test for statistical dependence based on the τ coefficient. It is a measure of rank correlation: the similarity of the orderings of the data when ranked by each of the quantities. It is named after Maurice Kendall, who developed it in 1938,[1] though Gustav Fechner had proposed a similar measure in the context of time series in 1897.[2]

Intuitively, the Kendall correlation between two variables will be high when observations have a similar or identical rank (i.e. relative position label of the observations within the variable: 1st, 2nd, 3rd, etc.) between the two variables, and low when observations have a dissimilar or fully reversed rank between the two variables.

Both Kendall's $\tau$ and Spearman's $\rho$ can be formulated as special cases of a more general correlation coefficient. Its notions of concordance and discordance also appear in other areas of statistics, like the Rand index in cluster analysis.

Definition

Figure: All points in the gray area are concordant and all points in the white area are discordant with respect to the point $(X_1, Y_1)$. With $n = 30$ points, there are a total of $\binom{30}{2} = 435$ possible point pairs. In this example there are 395 concordant point pairs and 40 discordant point pairs, leading to a Kendall rank correlation coefficient of 0.816.

Let $(x_1, y_1), \ldots, (x_n, y_n)$ be a set of observations of the joint random variables X and Y, such that all the values of ($x_i$) and ($y_i$) are unique. (See the section #Accounting for ties for ways of handling non-unique values.) Any pair of observations $(x_i, y_i)$ and $(x_j, y_j)$, where $i < j$, are said to be concordant if the sort order of $(x_i, x_j)$ and $(y_i, y_j)$ agrees: that is, if either both $x_i > x_j$ and $y_i > y_j$ hold or both $x_i < x_j$ and $y_i < y_j$; otherwise they are said to be discordant.

In the absence of ties, the Kendall τ coefficient is defined as:

$$\tau = \frac{(\text{number of concordant pairs}) - (\text{number of discordant pairs})}{\binom{n}{2}}$$ [3]

where $\binom{n}{2} = \frac{n(n-1)}{2}$ is the binomial coefficient for the number of ways to choose two items from $n$ items.

The number of discordant pairs is equal to the inversion number of the permutation that puts the y-sequence into the same order as the x-sequence.
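
As an illustration, the definition can be evaluated directly by counting concordant and discordant pairs. The following is a minimal Python sketch (the helper name kendall_tau_a is arbitrary; no ties are assumed):

from itertools import combinations

def kendall_tau_a(x, y):
    """Kendall's tau from the definition above, assuming no ties in x or y."""
    n = len(x)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1      # sort orders of (x_i, x_j) and (y_i, y_j) agree
        elif s < 0:
            discordant += 1      # sort orders disagree
    return (concordant - discordant) / (n * (n - 1) / 2)

print(kendall_tau_a([1, 2, 3, 4], [4, 3, 2, 1]))   # a fully reversed ranking gives -1.0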

Properties


The denominator is the total number of pair combinations, so the coefficient must be in the range −1 ≤ τ ≤ 1.

  • If the agreement between the two rankings is perfect (i.e., the two rankings are the same) the coefficient has value 1.
  • If the disagreement between the two rankings is perfect (i.e., one ranking is the reverse of the other) the coefficient has value −1.
  • If X and Y are independent random variables and not constant, then the expectation of the coefficient is zero.
  • An explicit expression for Kendall's rank coefficient is $\tau = \frac{2}{n(n-1)} \sum_{i<j} \operatorname{sgn}(x_i - x_j)\operatorname{sgn}(y_i - y_j)$.

Hypothesis test


The Kendall rank coefficient is often used as a test statistic in a statistical hypothesis test to establish whether two variables may be regarded as statistically dependent. This test is non-parametric, as it does not rely on any assumptions on the distributions of X or Y or the distribution of (X,Y).

Under the null hypothesis of independence of X and Y, the sampling distribution of τ has an expected value of zero. The precise distribution cannot be characterized in terms of common distributions, but may be calculated exactly for small samples; for larger samples, it is common to use an approximation to the normal distribution, with mean zero and variance $\frac{2(2n+5)}{9n(n-1)}$.[4]

Theorem. If the samples are independent, then the variance of $\tau_A$ is given by $\operatorname{Var}[\tau_A] = \frac{2(2n+5)}{9n(n-1)}$.

Proof
Valz & McLeod (1990;[5] 1995[6])

WLOG, we reorder the data pairs so that $x_1 < x_2 < \cdots < x_n$. By the assumption of independence, the order of $y_1, \ldots, y_n$ is a permutation sampled uniformly at random from $S_n$, the permutation group on $\{1, \ldots, n\}$.

For each permutation, its unique inversion code is $l_1 l_2 \cdots l_{n-1}$ such that each $l_i$ is in the range $\{0, 1, \ldots, i\}$. Sampling a permutation uniformly is equivalent to sampling an inversion code uniformly, which is equivalent to sampling each $l_i$ uniformly and independently.

Since the number of discordant pairs equals the total inversion number $\sum_{i=1}^{n-1} l_i$, we have

$$\operatorname{Var}[\tau_A] = \operatorname{Var}\!\left[\frac{\binom{n}{2} - 2\sum_{i=1}^{n-1} l_i}{\binom{n}{2}}\right] = \frac{4}{\binom{n}{2}^2} \sum_{i=1}^{n-1} \operatorname{Var}[l_i].$$

Each $l_i$ is a uniform random variable on $\{0, 1, \ldots, i\}$, so $\operatorname{E}[l_i] = \frac{i}{2}$ and $\operatorname{Var}[l_i] = \frac{i(i+2)}{12}$. Applying the sum-of-squares formula gives $\sum_{i=1}^{n-1} \operatorname{Var}[l_i] = \frac{n(n-1)(2n+5)}{72}$, and therefore $\operatorname{Var}[\tau_A] = \frac{2(2n+5)}{9n(n-1)}$.

Asymptotic normality. In the $n \to \infty$ limit, $z_A = \frac{3(n_c - n_d)}{\sqrt{n(n-1)(2n+5)/2}}$ converges in distribution to the standard normal distribution.

Proof

Use a result from A Class of Statistics with Asymptotically Normal Distribution, Hoeffding (1948).[7]
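
The variance formula can be checked numerically. The following Python sketch (an illustration only, assuming NumPy is available) draws pairs of independent random rankings and compares the empirical variance of $\tau_A$ with $\frac{2(2n+5)}{9n(n-1)}$:

import numpy as np

rng = np.random.default_rng(0)
n, reps = 30, 5000
theory = 2 * (2 * n + 5) / (9 * n * (n - 1))   # ≈ 0.0166 for n = 30

taus = np.empty(reps)
for k in range(reps):
    x = rng.permutation(n)                      # under independence, the order of y
    y = rng.permutation(n)                      # is a uniformly random permutation
    dx = np.sign(x[:, None] - x[None, :])       # signs of all pairwise differences
    dy = np.sign(y[:, None] - y[None, :])
    s = np.triu(dx * dy, 1).sum()               # n_c - n_d over the upper triangle
    taus[k] = s / (n * (n - 1) / 2)

print(taus.var(), theory)                       # the two values should be close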

Case of standard normal distributions


If $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$ are independent and identically distributed samples from the same jointly normal distribution with a known Pearson correlation coefficient $r$, then the expectation of the Kendall rank correlation has a closed-form formula.[8]

Greiner's equality. If $(X, Y)$ are jointly normal, with correlation $r$, then

$$\operatorname{E}[\tau] = \frac{2}{\pi}\arcsin(r).$$

The name is credited to Richard Greiner (1909)[9] by P. A. P. Moran.[10]

Proof[11]

Define the following quantities.

  • $\Delta_{ij} := (x_i - x_j,\; y_i - y_j)$ is a point in $\mathbb{R}^2$.
  • $C := \{(u, v) \in \mathbb{R}^2 : uv > 0\}$ is the union of the open first and third quadrants.

In this notation, we see that the number of concordant pairs, $n_c$, is equal to the number of $\Delta_{ij}$ with $i < j$ that fall in the subset $C$. That is, $n_c = \sum_{1 \le i < j \le n} \mathbf{1}[\Delta_{ij} \in C]$.

Thus,

$$\operatorname{E}[\tau] = \frac{2\operatorname{E}[n_c] - \binom{n}{2}}{\binom{n}{2}} = \frac{2}{\binom{n}{2}} \sum_{1 \le i < j \le n} \Pr(\Delta_{ij} \in C) - 1.$$

Since each $(x_i, y_i)$ is an independent and identically distributed sample of the jointly normal distribution, the pairing does not matter, so each term in the summation is exactly the same, and so $\operatorname{E}[\tau] = 2\Pr(\Delta_{12} \in C) - 1$ and it remains to calculate the probability. We perform this by repeated affine transforms.

First normalize $X$ and $Y$ by subtracting the mean and dividing by the standard deviation. This does not change $\tau$. This gives us $\Delta_{12} = \sqrt{2}\,\Sigma^{1/2} z$, where $z$ is sampled from the standard normal distribution on $\mathbb{R}^2$ and $\Sigma = \begin{pmatrix} 1 & r \\ r & 1 \end{pmatrix}$.

Thus, $\Pr(\Delta_{12} \in C) = \Pr(\Sigma^{1/2} z \in C) = \Pr(z \in \Sigma^{-1/2} C)$ (the factor $\sqrt{2}$ is irrelevant because $C$ is a cone), where the vector $z$ is still distributed as the standard normal distribution on $\mathbb{R}^2$. It remains to perform some unenlightening tedious matrix exponentiations and trigonometry, which can be skipped over.

Thus, $\Delta_{12} \in C$ iff $z \in \Sigma^{-1/2} C$, where the subset on the right is a “squashed” version of the two quadrants. Since the standard normal distribution is rotationally symmetric, we need only calculate the angle spanned by each squashed quadrant.

The first quadrant is the sector bounded by the two rays $(1, 0)$ and $(0, 1)$. It is transformed to the sector bounded by the two rays $\Sigma^{-1/2}(1, 0)^T$ and $\Sigma^{-1/2}(0, 1)^T$. They respectively make angle $\theta$ with the horizontal and vertical axis, where

$$\tan\theta = \frac{\sqrt{1+r} - \sqrt{1-r}}{\sqrt{1+r} + \sqrt{1-r}}, \qquad \text{i.e. } \theta = \tfrac{1}{2}\arcsin(r).$$

Together, the two transformed quadrants span an angle of $\pi + 4\theta$, so $\Pr(\Delta_{12} \in C) = \frac{\pi + 4\theta}{2\pi} = \frac{1}{2} + \frac{2\theta}{\pi}$, and therefore

$$\operatorname{E}[\tau] = 2\Pr(\Delta_{12} \in C) - 1 = \frac{4\theta}{\pi} = \frac{2}{\pi}\arcsin(r).$$
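
Greiner's equality can be checked numerically. The following Python sketch (an illustration only, assuming NumPy and SciPy are available) draws bivariate normal samples with correlation $r$ and compares the average Kendall correlation with $\frac{2}{\pi}\arcsin(r)$:

import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(1)
r = 0.6
cov = [[1.0, r], [r, 1.0]]
n, reps = 200, 200

taus = []
for _ in range(reps):
    xy = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    tau, _ = kendalltau(xy[:, 0], xy[:, 1])
    taus.append(tau)

print(np.mean(taus))               # empirical mean of tau
print(2 / np.pi * np.arcsin(r))    # Greiner's equality: ≈ 0.4097 for r = 0.6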

Accounting for ties


A pair $\{(x_i, y_i), (x_j, y_j)\}$ is said to be tied if and only if $x_i = x_j$ or $y_i = y_j$; a tied pair is neither concordant nor discordant. When tied pairs arise in the data, the coefficient may be modified in a number of ways to keep it in the range [−1, 1]:

Tau-a


The Tau statistic defined by Kendall in 1938[1] was retrospectively renamed Tau-a. It represents the strength of positive or negative association of two quantitative or ordinal variables without any adjustment for ties. It is defined as:

$$\tau_A = \frac{n_c - n_d}{n_0}$$

where $n_c$, $n_d$ and $n_0$ are defined as in the next section.

When ties are present, $n_c + n_d < n_0$ and the coefficient can never be equal to +1 or −1. Even a perfect equality of the two variables (X = Y) leads to a Tau-a < 1.
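
As a small illustration (a Python sketch; the helper name tau_a is arbitrary), even X = Y gives a Tau-a below 1 once a tie is present, because the tied pair contributes to the denominator $n_0$ but not to $n_c - n_d$:

def tau_a(x, y):
    n = len(x)
    nc = nd = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            nc += s > 0                     # concordant
            nd += s < 0                     # discordant (tied pairs add nothing)
    return (nc - nd) / (n * (n - 1) / 2)    # denominator n_0 includes tied pairs

x = [1, 2, 2, 3]
print(tau_a(x, x))   # 5/6 ≈ 0.833 rather than 1, because of the tied pair (2, 2)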

Tau-b


The Tau-b statistic, unlike Tau-a, makes adjustments for ties. This Tau-b was first described by Kendall in 1945 under the name Tau-w[12] as an extension of the original Tau statistic supporting ties. Values of Tau-b range from −1 (100% negative association, or perfect disagreement) to +1 (100% positive association, or perfect agreement). In case of the absence of association, Tau-b is equal to zero.

The Kendall Tau-b coefficient is defined as:

$$\tau_B = \frac{n_c - n_d}{\sqrt{(n_0 - n_1)(n_0 - n_2)}}$$

where

$$n_0 = \frac{n(n-1)}{2}, \quad n_1 = \sum_i \frac{t_i(t_i - 1)}{2}, \quad n_2 = \sum_j \frac{u_j(u_j - 1)}{2},$$

$n_c$ = number of concordant pairs, $n_d$ = number of discordant pairs, $t_i$ = number of tied values in the $i$-th group of ties for the first quantity, and $u_j$ = number of tied values in the $j$-th group of ties for the second quantity.

A simple algorithm developed in BASIC computes the Tau-b coefficient using an alternative formula.[13]

Be aware that some statistical packages, e.g. SPSS, use alternative formulas for computational efficiency, with double the 'usual' number of concordant and discordant pairs.[14]
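
The following Python sketch (an illustration only, assuming SciPy is available) applies the Tau-b formula above directly and compares the result with scipy.stats.kendalltau, which returns Tau-b by default when ties are present:

from collections import Counter
from scipy.stats import kendalltau

def tau_b(x, y):
    n = len(x)
    nc = nd = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            nc += s > 0
            nd += s < 0
    n0 = n * (n - 1) / 2
    n1 = sum(t * (t - 1) / 2 for t in Counter(x).values())   # ties in the first quantity
    n2 = sum(u * (u - 1) / 2 for u in Counter(y).values())   # ties in the second quantity
    return (nc - nd) / ((n0 - n1) * (n0 - n2)) ** 0.5

x = [1, 2, 2, 3, 4]
y = [1, 3, 2, 3, 5]
print(tau_b(x, y), kendalltau(x, y)[0])   # both should give 8/9 ≈ 0.889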

Tau-c


Tau-c (also called Stuart-Kendall Tau-c)[15] was first defined by Stuart in 1953.[16] Contrary to Tau-b, Tau-c can be equal to +1 or −1 for non-square (i.e. rectangular) contingency tables,[15][16] i.e. when the underlying scales of the two variables have different numbers of possible values. For instance, if the variable X has a continuous uniform distribution between 0 and 100 and Y is a dichotomous variable equal to 1 if X ≥ 50 and 0 if X < 50, the Tau-c statistic of X and Y is equal to 1 while Tau-b is equal to 0.707. A Tau-c equal to 1 can be interpreted as the best possible positive correlation conditional on the marginal distributions, while a Tau-b equal to 1 can be interpreted as a perfect positive monotonic correlation, where the distribution of X conditional on Y has zero variance and the distribution of Y conditional on X has zero variance, so that a bijective function f with f(X) = Y exists.

The Stuart-Kendall Tau-c coefficient is defined as:[16]

$$\tau_C = \frac{2(n_c - n_d)}{n^2 \frac{(m-1)}{m}}$$

where $n_c$ and $n_d$ are the numbers of concordant and discordant pairs, $n$ is the number of observations, and $m = \min(r, c)$, with $r$ and $c$ the numbers of distinct values (rows and columns of the contingency table) taken by the first and second quantity respectively.
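
The example above with a dichotomous Y can be reproduced by simulation. In the following Python sketch (an illustration only, assuming NumPy is available), $m = 2$, so Tau-c comes out close to 1 while Tau-b stays near 0.707:

import numpy as np

rng = np.random.default_rng(2)
n = 1000
x = rng.uniform(0, 100, n)
y = (x >= 50).astype(int)                        # dichotomous variable

dx = np.sign(x[:, None] - x[None, :])
dy = np.sign(y[:, None] - y[None, :])
prod = np.triu(dx * dy, 1)                       # strictly upper triangle: pairs i < j
nc = int((prod > 0).sum())
nd = int((prod < 0).sum())

m = min(len(np.unique(x)), len(np.unique(y)))    # m = min(#rows, #columns) = 2 here
tau_c = 2 * (nc - nd) / (n ** 2 * (m - 1) / m)
print(tau_c)                                     # close to 1

n0 = n * (n - 1) / 2
n1 = 0                                           # x is continuous, so no ties in x
n2 = sum(c * (c - 1) / 2 for c in np.bincount(y))
tau_b = (nc - nd) / np.sqrt((n0 - n1) * (n0 - n2))
print(tau_b)                                     # close to 1/sqrt(2) ≈ 0.707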

Significance tests


When two quantities are statistically dependent, the distribution of $\tau$ is not easily characterizable in terms of known distributions. However, the following statistic, $z_A$, is approximately distributed as a standard normal when the variables are statistically independent:

$$z_A = \frac{3(n_c - n_d)}{\sqrt{\frac{n(n-1)(2n+5)}{2}}}$$

where $n_c$ and $n_d$ are the numbers of concordant and discordant pairs, respectively.

Thus, to test whether two variables are statistically dependent, one computes $z_A$, and finds the cumulative probability for a standard normal distribution at $-|z_A|$. For a 2-tailed test, multiply that number by two to obtain the p-value. If the p-value is below a given significance level, one rejects the null hypothesis (at that significance level) that the quantities are statistically independent.

Numerous adjustments should be added to $z_A$ when accounting for ties. The following statistic, $z_B$, has the same distribution as the $z_A$ distribution, and is again approximately equal to a standard normal distribution when the quantities are statistically independent:

$$z_B = \frac{n_c - n_d}{\sqrt{v}}$$

where

$$v = \frac{v_0 - v_t - v_u}{18} + v_1 + v_2, \qquad v_0 = n(n-1)(2n+5), \qquad v_t = \sum_i t_i(t_i - 1)(2t_i + 5), \qquad v_u = \sum_j u_j(u_j - 1)(2u_j + 5),$$

$$v_1 = \frac{\sum_i t_i(t_i - 1) \sum_j u_j(u_j - 1)}{2n(n-1)}, \qquad v_2 = \frac{\sum_i t_i(t_i - 1)(t_i - 2) \sum_j u_j(u_j - 1)(u_j - 2)}{9n(n-1)(n-2)}.$$

This is sometimes referred to as the Mann-Kendall test.[17]
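
For tie-free data the test takes only a few lines. The following Python sketch (the helper name kendall_z_test is arbitrary) computes $z_A$ and the two-sided p-value from the normal approximation:

import math
from statistics import NormalDist

def kendall_z_test(x, y):
    """Two-sided test of independence via the normal approximation (no ties assumed)."""
    n = len(x)
    nc = nd = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            nc += s > 0
            nd += s < 0
    z = 3 * (nc - nd) / math.sqrt(n * (n - 1) * (2 * n + 5) / 2)
    p = 2 * NormalDist().cdf(-abs(z))     # two-tailed p-value
    return z, p

x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2, 1, 4, 3, 6, 5, 8, 7]
print(kendall_z_test(x, y))               # roughly z ≈ 2.47, two-sided p ≈ 0.013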

Algorithms


The direct computation of the numerator $n_c - n_d$ involves two nested iterations, as characterized by the following pseudocode:

numer := 0
for i := 2..N do
    for j := 1..(i − 1) do
        numer := numer + sign(x[i] − x[j]) × sign(y[i] − y[j])
return numer

Although quick to implement, this algorithm is $O(n^2)$ in complexity and becomes very slow on large samples. A more sophisticated algorithm[18] built upon the Merge Sort algorithm can be used to compute the numerator in $O(n \log n)$ time.

Begin by ordering your data points, sorting by the first quantity, $x$, and secondarily (among ties in $x$) by the second quantity, $y$. With this initial ordering, $y$ is not sorted, and the core of the algorithm consists of computing how many steps a Bubble Sort would take to sort this initial $y$. An enhanced Merge Sort algorithm, with $O(n \log n)$ complexity, can be applied to compute the number of swaps, $S(y)$, that would be required by a Bubble Sort to sort $y$. Then the numerator for $\tau$ is computed as:

$$n_c - n_d = n_0 - n_1 - n_2 + n_3 - 2 S(y),$$

where $n_3$ is computed like $n_1$ and $n_2$, but with respect to the joint ties in $x$ and $y$.

A Merge Sort partitions the data to be sorted, $y$, into two roughly equal halves, $y_{\mathrm{left}}$ and $y_{\mathrm{right}}$, then sorts each half recursively, and then merges the two sorted halves into a fully sorted vector. The number of Bubble Sort swaps is equal to:

$$S(y) = S(y_{\mathrm{left}}) + S(y_{\mathrm{right}}) + M(Y_{\mathrm{left}}, Y_{\mathrm{right}})$$

where $Y_{\mathrm{left}}$ and $Y_{\mathrm{right}}$ are the sorted versions of $y_{\mathrm{left}}$ and $y_{\mathrm{right}}$, and $M(\cdot,\cdot)$ characterizes the Bubble Sort swap-equivalent for a merge operation. $M(\cdot,\cdot)$ is computed as depicted in the following pseudocode:

function M(L[1..n], R[1..m]) is
    i := 1
    j := 1
    nSwaps := 0
    while i ≤ n and j ≤ m do
        if R[j] < L[i] then
            nSwaps := nSwaps + n − i + 1
            j := j + 1
        else
            i := i + 1
    return nSwaps

A side effect of the above steps is that you end up with both a sorted version of $x$ and a sorted version of $y$. With these, the factors $t_i$ and $u_j$ used to compute $\tau_B$ are easily obtained in a single linear-time pass through the sorted arrays.
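
The merge-based counting translates directly into code. The following Python sketch (an illustration for tie-free data; the names merge_count and kendall_tau_fast are arbitrary) computes $S(y)$ recursively and then applies the formula above with $n_1 = n_2 = n_3 = 0$, i.e. $n_c - n_d = n_0 - 2S(y)$:

def merge_count(seq):
    """Return (sorted copy of seq, number of Bubble Sort swaps needed to sort it)."""
    n = len(seq)
    if n <= 1:
        return list(seq), 0
    mid = n // 2
    left, s_left = merge_count(seq[:mid])
    right, s_right = merge_count(seq[mid:])
    merged, swaps = [], s_left + s_right
    i = j = 0
    while i < len(left) and j < len(right):
        if right[j] < left[i]:
            swaps += len(left) - i          # right[j] must jump over every remaining left element
            merged.append(right[j]); j += 1
        else:
            merged.append(left[i]); i += 1
    merged.extend(left[i:]); merged.extend(right[j:])
    return merged, swaps

def kendall_tau_fast(x, y):
    """O(n log n) Kendall tau for data without ties."""
    n = len(x)
    y_by_x = [yv for _, yv in sorted(zip(x, y))]   # order y by the ranking of x
    _, swaps = merge_count(y_by_x)
    n0 = n * (n - 1) // 2
    return (n0 - 2 * swaps) / n0

print(kendall_tau_fast([1, 2, 3, 4, 5], [3, 1, 2, 5, 4]))   # 0.4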

Approximating Kendall rank correlation from a stream


Efficient algorithms for calculating the Kendall rank correlation coefficient as per the standard estimator have $O(n \log n)$ time complexity. However, these algorithms necessitate the availability of all data to determine observation ranks, posing a challenge in sequential data settings where observations are revealed incrementally. Fortunately, algorithms do exist to estimate the Kendall rank correlation coefficient in sequential settings.[19][20] These algorithms have $O(1)$ update time and space complexity, scaling efficiently with the number of observations. Consequently, when processing a batch of $n$ observations, the time complexity becomes $O(n)$, while space complexity remains a constant $O(1)$.

The first such algorithm[19] presents an approximation to the Kendall rank correlation coefficient based on coarsening the joint distribution of the random variables. Non-stationary data is treated via a moving window approach. This algorithm[19] is simple and is able to handle discrete random variables along with continuous random variables without modification.

The second algorithm[20] is based on Hermite series estimators and utilizes an alternative estimator for the exact Kendall rank correlation coefficient, i.e. for the probability of concordance minus the probability of discordance of pairs of bivariate observations. This alternative estimator also serves as an approximation to the standard estimator. This algorithm[20] is only applicable to continuous random variables, but it has demonstrated superior accuracy and potential speed gains compared to the first algorithm described,[19] along with the capability to handle non-stationary data without relying on sliding windows. An efficient implementation of the Hermite series based approach is contained in the R package hermiter.[20]
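
As a simplified illustration of the coarsening idea only (not the algorithms of [19] or [20]), one can bin incoming pairs into a fixed grid of counts and compute Tau-b from the resulting contingency table; the cost of an update or an estimate then depends on the grid size rather than on the number of observations seen. A Python sketch:

import numpy as np

class StreamingKendall:
    """Toy streaming approximation: bin (x, y) pairs into a fixed grid of counts."""
    def __init__(self, x_edges, y_edges):
        # the bin edges must be chosen up front (an assumption of this sketch)
        self.x_edges = np.asarray(x_edges)
        self.y_edges = np.asarray(y_edges)
        self.counts = np.zeros((len(x_edges) + 1, len(y_edges) + 1))

    def update(self, x, y):
        i = np.searchsorted(self.x_edges, x)
        j = np.searchsorted(self.y_edges, y)
        self.counts[i, j] += 1

    def tau_b(self):
        c, n = self.counts, self.counts.sum()
        nc = nd = 0.0
        rows, cols = c.shape
        for i in range(rows):
            for j in range(cols):
                if c[i, j]:
                    nc += c[i, j] * c[i + 1:, j + 1:].sum()   # cells up and to the right
                    nd += c[i, j] * c[i + 1:, :j].sum()       # cells up and to the left
        n0 = n * (n - 1) / 2
        n1 = sum(t * (t - 1) / 2 for t in c.sum(axis=1))      # ties within a row of the grid
        n2 = sum(u * (u - 1) / 2 for u in c.sum(axis=0))      # ties within a column of the grid
        return (nc - nd) / np.sqrt((n0 - n1) * (n0 - n2))

rng = np.random.default_rng(3)
est = StreamingKendall(np.linspace(-3, 3, 20), np.linspace(-3, 3, 20))
for _ in range(5000):
    x = rng.standard_normal()
    y = 0.7 * x + 0.3 * rng.standard_normal()
    est.update(x, y)
print(est.tau_b())                                            # a coarse approximation of the true tau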

Software implementations

  • R implements the test for cor.test(x, y, method = "kendall") in its "stats" package (also cor(x, y, method = "kendall") will work, but the latter does not return the p-value). All three versions of the coefficient are available in the "DescTools" package along with the confidence intervals: KendallTauA(x,y,conf.level=0.95) for $\tau_A$, KendallTauB(x,y,conf.level=0.95) for $\tau_B$, StuartTauC(x,y,conf.level=0.95) for $\tau_C$. Fast batch estimates of the Kendall rank correlation coefficient along with sequential estimates are provided in the package hermiter.[20]
  • For Python, the SciPy library implements the computation of $\tau_B$ in scipy.stats.kendalltau (a brief usage sketch follows this list).
  • In Stata, it is implemented as ktau varlist.
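
A minimal usage example for the SciPy routine listed above (the data are arbitrary):

from scipy.stats import kendalltau

x = [12, 2, 1, 12, 2]
y = [1, 4, 7, 1, 0]
tau, p_value = kendalltau(x, y)
print(tau, p_value)    # prints the tau-b estimate and its two-sided p-value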


References

  1. ^ a b Kendall, M. G. (1938). "A New Measure of Rank Correlation". Biometrika. 30 (1–2): 81–89. doi:10.1093/biomet/30.1-2.81. JSTOR 2332226.
  2. ^ Kruskal, W. H. (1958). "Ordinal Measures of Association". Journal of the American Statistical Association. 53 (284): 814–861. doi:10.2307/2281954. JSTOR 2281954. MR 0100941.
  3. ^ Nelsen, R.B. (2001) [1994], "Kendall tau metric", Encyclopedia of Mathematics, EMS Press
  4. ^ Prokhorov, A.V. (2001) [1994], "Kendall coefficient of rank correlation", Encyclopedia of Mathematics, EMS Press
  5. ^ Valz, Paul D.; McLeod, A. Ian (February 1990). "A Simplified Derivation of the Variance of Kendall's Rank Correlation Coefficient". The American Statistician. 44 (1): 39–40. doi:10.1080/00031305.1990.10475691. ISSN 0003-1305.
  6. ^ Valz, Paul D.; McLeod, A. Ian; Thompson, Mary E. (February 1995). "Cumulant Generating Function and Tail Probability Approximations for Kendall's Score with Tied Rankings". The Annals of Statistics. 23 (1): 144–160. doi:10.1214/aos/1176324460. ISSN 0090-5364.
  7. ^ Hoeffding, Wassily (1992), Kotz, Samuel; Johnson, Norman L. (eds.), "A Class of Statistics with Asymptotically Normal Distribution", Breakthroughs in Statistics: Foundations and Basic Theory, Springer Series in Statistics, New York, NY: Springer, pp. 308–334, doi:10.1007/978-1-4612-0919-5_20, ISBN 978-1-4612-0919-5, retrieved 2025-08-06
  8. ^ Kendall, M. G. (1949). "Rank and Product-Moment Correlation". Biometrika. 36 (1/2): 177–193. doi:10.2307/2332540. ISSN 0006-3444. JSTOR 2332540. PMID 18132091.
  9. ^ Richard Greiner, (1909), Ueber das Fehlersystem der Kollektivmaßlehre, Zeitschrift für Mathematik und Physik, Band 57, B. G. Teubner, Leipzig, pages 121-158, 225-260, 337-373.
  10. ^ Moran, P. A. P. (1948). "Rank Correlation and Product-Moment Correlation". Biometrika. 35 (1/2): 203–206. doi:10.2307/2332641. ISSN 0006-3444. JSTOR 2332641. PMID 18867425.
  11. ^ Berger, Daniel (2016). "A Proof of Greiner's Equality". SSRN Electronic Journal. doi:10.2139/ssrn.2830471. ISSN 1556-5068.
  12. ^ Kendall, M. G. (1945). "The Treatment of Ties in Ranking Problems". Biometrika. 33 (3): 239–251. doi:10.2307/2332303. PMID 21006841. Retrieved 12 November 2024.
  13. ^ Alfred Brophy (1986). "An algorithm and program for calculation of Kendall's rank correlation coefficient" (PDF). Behavior Research Methods, Instruments, & Computers. 18: 45–46. doi:10.3758/BF03200993. S2CID 62601552.
  14. ^ IBM (2016). IBM SPSS Statistics 24 Algorithms. IBM. p. 168. Retrieved 31 August 2017.
  15. ^ a b Berry, K. J.; Johnston, J. E.; Zahran, S.; Mielke, P. W. (2009). "Stuart's tau measure of effect size for ordinal variables: Some methodological considerations". Behavior Research Methods. 41 (4): 1144–1148. doi:10.3758/brm.41.4.1144. PMID 19897822.
  16. ^ a b c Stuart, A. (1953). "The Estimation and Comparison of Strengths of Association in Contingency Tables". Biometrika. 40 (1–2): 105–110. doi:10.2307/2333101. JSTOR 2333101.
  17. ^ Valz, Paul D.; McLeod, A. Ian; Thompson, Mary E. (February 1995). "Cumulant Generating Function and Tail Probability Approximations for Kendall's Score with Tied Rankings". The Annals of Statistics. 23 (1): 144–160. doi:10.1214/aos/1176324460. ISSN 0090-5364.
  18. ^ Knight, W. (1966). "A Computer Method for Calculating Kendall's Tau with Ungrouped Data". Journal of the American Statistical Association. 61 (314): 436–439. doi:10.2307/2282833. JSTOR 2282833.
  19. ^ a b c d Xiao, W. (2019). "Novel Online Algorithms for Nonparametric Correlations with Application to Analyze Sensor Data". 2019 IEEE International Conference on Big Data (Big Data). pp. 404–412. doi:10.1109/BigData47090.2019.9006483. ISBN 978-1-7281-0858-2. S2CID 211298570.
  20. ^ a b c d e Stephanou, M.; Varughese, M. (2023). "Hermiter: R package for sequential nonparametric estimation". Computational Statistics. arXiv:2111.14091. doi:10.1007/s00180-023-01382-0. S2CID 244715035.
