Statistical inference

From Wikipedia, the free encyclopedia

Statistical inference is the process of using data analysis to infer properties of an underlying probability distribution.[1] Inferential statistical analysis infers properties of a population, for example by testing hypotheses and deriving estimates. It is assumed that the observed data set is sampled from a larger population.

Inferential statistics can be contrasted with descriptive statistics. Descriptive statistics is solely concerned with properties of the observed data, and it does not rest on the assumption that the data come from a larger population. In machine learning, the term inference is sometimes used instead to mean "make a prediction, by evaluating an already trained model";[2] in this context inferring properties of the model is referred to as training or learning (rather than inference), and using a model for prediction is referred to as inference (instead of prediction); see also predictive inference.

Introduction

Statistical inference makes propositions about a population, using data drawn from the population with some form of sampling. Given a hypothesis about a population, for which we wish to draw inferences, statistical inference consists of (first) selecting a statistical model of the process that generates the data and (second) deducing propositions from the model.[3]

Konishi and Kitagawa state "The majority of the problems in statistical inference can be considered to be problems related to statistical modeling".[4] Relatedly, Sir David Cox has said, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis".[5]

The conclusion of a statistical inference is a statistical proposition.[6] Some common forms of statistical proposition are the following (the first two are illustrated in a short sketch after the list):

  • a point estimate, i.e. a particular value that best approximates some parameter of interest;
  • an interval estimate, e.g. a confidence interval (or set estimate). A confidence interval is an interval constructed using data from a sample such that, under repeated sampling of such datasets, a fixed proportion of the resulting intervals (e.g., 95% for a 95% confidence interval) would, in the limit, contain the true value of the population parameter;
  • a credible interval, i.e. a set of values containing, for example, 95% of posterior belief;
  • rejection of a hypothesis;[note 1]
  • clustering or classification of data points into groups.
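
As an illustration of the first two forms, the following minimal Python sketch (assuming NumPy and SciPy are available; the data are simulated purely for illustration) computes a point estimate and a 95% confidence interval for a population mean:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    sample = rng.normal(loc=10.0, scale=2.0, size=50)  # simulated data for illustration

    # Point estimate: the sample mean as an estimate of the population mean.
    point_estimate = sample.mean()

    # 95% confidence interval from the t distribution with n - 1 degrees of freedom.
    n = sample.size
    sem = sample.std(ddof=1) / np.sqrt(n)      # standard error of the mean
    t_crit = stats.t.ppf(0.975, df=n - 1)      # two-sided 95% critical value
    interval = (point_estimate - t_crit * sem, point_estimate + t_crit * sem)

    print(f"point estimate: {point_estimate:.2f}")
    print(f"95% CI: ({interval[0]:.2f}, {interval[1]:.2f})")

Under repeated sampling from the same population, intervals constructed this way would cover the true mean about 95% of the time.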

Models and assumptions

Any statistical inference requires some assumptions. A statistical model is a set of assumptions concerning the generation of the observed data and similar data. Descriptions of statistical models usually emphasize the role of population quantities of interest, about which we wish to draw inference.[7] Descriptive statistics are typically used as a preliminary step before more formal inferences are drawn.[8]

Degree of models/assumptions

Statisticians distinguish between three levels of modeling assumptions (the non-parametric example is illustrated in a short sketch after the list):

  • Fully parametric: The probability distributions describing the data-generation process are assumed to be fully described by a family of probability distributions involving only a finite number of unknown parameters.[7] For example, one may assume that the distribution of population values is truly Normal, with unknown mean and variance, and that datasets are generated by 'simple' random sampling. The family of generalized linear models is a widely used and flexible class of parametric models.
  • Non-parametric: The assumptions made about the process generating the data are much less than in parametric statistics and may be minimal.[9] For example, every continuous probability distribution has a median, which may be estimated using the sample median or the Hodges–Lehmann–Sen estimator, which has good properties when the data arise from simple random sampling.
  • Semi-parametric: This term typically implies assumptions 'in between' fully and non-parametric approaches. For example, one may assume that a population distribution has a finite mean. Furthermore, one may assume that the mean response level in the population depends in a truly linear manner on some covariate (a parametric assumption) but not make any parametric assumption describing the variance around that mean (i.e. about the presence or possible form of any heteroscedasticity). More generally, semi-parametric models can often be separated into 'structural' and 'random variation' components. One component is treated parametrically and the other non-parametrically. The well-known Cox model is a set of semi-parametric assumptions.[citation needed]
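
To make the non-parametric example concrete, the following minimal Python sketch (assuming NumPy; the heavy-tailed simulated data are purely illustrative) estimates a population median both by the sample median and by the Hodges–Lehmann estimator, the median of all pairwise Walsh averages:

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.standard_t(df=3, size=25)  # heavy-tailed data; no normality assumed

    # Sample median: a simple non-parametric estimate of the population median.
    sample_median = np.median(x)

    # Hodges-Lehmann estimator: median of all pairwise averages (x_i + x_j) / 2, i <= j.
    i, j = np.triu_indices(len(x))
    walsh_averages = (x[i] + x[j]) / 2.0
    hodges_lehmann = np.median(walsh_averages)

    print(f"sample median:  {sample_median:.3f}")
    print(f"Hodges-Lehmann: {hodges_lehmann:.3f}")

Neither estimator assumes a parametric family for the population distribution; only a continuous distribution is assumed (and, for the Hodges–Lehmann estimator to target the median itself, symmetry about it).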

Importance of valid models/assumptions

[Figure: a histogram used to assess the assumption of normality; approximate normality is indicated by an even spread of the data beneath the fitted bell curve.]

Whatever level of assumption is made, correctly calibrated inference, in general, requires these assumptions to be correct; i.e. that the data-generating mechanisms really have been correctly specified.

Incorrect assumptions of 'simple' random sampling can invalidate statistical inference.[10] More complex semi- and fully parametric assumptions are also cause for concern. For example, incorrectly assuming the Cox model can in some cases lead to faulty conclusions.[11] Incorrect assumptions of Normality in the population also invalidate some forms of regression-based inference.[12] The use of any parametric model is viewed skeptically by most experts in sampling human populations: "most sampling statisticians, when they deal with confidence intervals at all, limit themselves to statements about [estimators] based on very large samples, where the central limit theorem ensures that these [estimators] will have distributions that are nearly normal."[13] In particular, a normal distribution "would be a totally unrealistic and catastrophically unwise assumption to make if we were dealing with any kind of economic population."[13] Here, the central limit theorem states that the distribution of the sample mean "for very large samples" is approximately normally distributed, if the distribution is not heavy-tailed.

Approximate distributions

Given the difficulty in specifying exact distributions of sample statistics, many methods have been developed for approximating these.

With finite samples, approximation results measure how closely a limiting distribution approximates the statistic's sampling distribution: For example, with 10,000 independent samples the normal distribution approximates (to two digits of accuracy) the distribution of the sample mean for many population distributions, by the Berry–Esseen theorem.[14] Yet for many practical purposes, the normal approximation provides a good approximation to the sample-mean's distribution when there are 10 (or more) independent samples, according to simulation studies and statisticians' experience.[14] Following Kolmogorov's work in the 1950s, advanced statistics uses approximation theory and functional analysis to quantify the error of approximation. In this approach, the metric geometry of probability distributions is studied; this approach quantifies approximation error with, for example, the Kullback–Leibler divergence, Bregman divergence, and the Hellinger distance.[15][16][17]

With indefinitely large samples, limiting results like the central limit theorem describe the sample statistic's limiting distribution if one exists. Limiting results are not statements about finite samples, and indeed are irrelevant to finite samples.[18][19][20] However, the asymptotic theory of limiting distributions is often invoked for work with finite samples. For example, limiting results are often invoked to justify the generalized method of moments and the use of generalized estimating equations, which are popular in econometrics and biostatistics. The magnitude of the difference between the limiting distribution and the true distribution (formally, the 'error' of the approximation) can be assessed using simulation.[21] The heuristic application of limiting results to finite samples is common practice in many applications, especially with low-dimensional models with log-concave likelihoods (such as with one-parameter exponential families).
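
As the text notes, the size of such an approximation error can be assessed by simulation. A minimal Python sketch (assuming NumPy and SciPy; the exponential population and the sample size are arbitrary choices for illustration) compares simulated tail probabilities of the standardized sample mean with those of its limiting normal distribution:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n, reps = 10, 100_000  # small sample size; many Monte Carlo replications

    # Sampling distribution of the standardized mean for an exponential(1)
    # population (mean 1, standard deviation 1), simulated by Monte Carlo.
    samples = rng.exponential(scale=1.0, size=(reps, n))
    z = (samples.mean(axis=1) - 1.0) / (1.0 / np.sqrt(n))

    # Compare simulated tail probabilities with the normal approximation.
    for c in (1.0, 1.645, 1.96):
        print(f"P(Z > {c}): simulated {np.mean(z > c):.4f} "
              f"vs normal {1 - stats.norm.cdf(c):.4f}")

The discrepancy between the two columns is exactly the finite-sample approximation error that the limit theorem itself leaves unquantified.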

Randomization-based models

For a given dataset that was produced by a randomization design, the randomization distribution of a statistic (under the null-hypothesis) is defined by evaluating the test statistic for all of the plans that could have been generated by the randomization design. In frequentist inference, the randomization allows inferences to be based on the randomization distribution rather than a subjective model, and this is important especially in survey sampling and design of experiments.[22][23] Statistical inference from randomized studies is also more straightforward than many other situations.[24][25][26] In Bayesian inference, randomization is also of importance: in survey sampling, use of sampling without replacement ensures the exchangeability of the sample with the population; in randomized experiments, randomization warrants a missing at random assumption for covariate information.[27]
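
For a two-group randomized experiment, the randomization distribution can be approximated by re-running the random assignment many times, as in the following minimal Python sketch (assuming NumPy; the response values and group sizes are invented for the example). With groups this small, one could instead enumerate all 252 equally likely assignments exactly:

    import numpy as np

    rng = np.random.default_rng(3)
    treated = np.array([5.2, 6.1, 5.8, 7.0, 6.4])
    control = np.array([4.9, 5.5, 5.0, 5.7, 5.3])
    observed = treated.mean() - control.mean()

    # Approximate the randomization distribution of the mean difference under
    # the null hypothesis of no treatment effect by re-randomizing the labels.
    pooled = np.concatenate([treated, control])
    n_treated = len(treated)
    diffs = []
    for _ in range(20_000):
        perm = rng.permutation(pooled)
        diffs.append(perm[:n_treated].mean() - perm[n_treated:].mean())

    # Two-sided randomization p-value.
    p_value = np.mean(np.abs(diffs) >= abs(observed))
    print(f"observed difference: {observed:.2f}, randomization p-value: {p_value:.4f}")

No distributional model is assumed here; the inference rests only on the physical act of randomization stated in the design.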

Objective randomization allows properly inductive procedures.[28][29][30][31][32] Many statisticians prefer randomization-based analysis of data that was generated by well-defined randomization procedures.[33] (However, it is true that in fields of science with developed theoretical knowledge and experimental control, randomized experiments may increase the costs of experimentation without improving the quality of inferences.[34][35]) Similarly, results from randomized experiments are recommended by leading statistical authorities as allowing inferences with greater reliability than do observational studies of the same phenomena.[36] However, a good observational study may be better than a bad randomized experiment.

The statistical analysis of a randomized experiment may be based on the randomization scheme stated in the experimental protocol and does not need a subjective model.[37][38]

However, at any time, some hypotheses cannot be tested using objective statistical models that accurately describe randomized experiments or random samples. In some cases, such randomized studies are uneconomical or unethical.

Model-based analysis of randomized experiments

It is standard practice to refer to a statistical model, e.g., a linear or logistic model, when analyzing data from randomized experiments.[39] However, the randomization scheme guides the choice of a statistical model. It is not possible to choose an appropriate model without knowing the randomization scheme.[23] Seriously misleading results can be obtained by analyzing data from randomized experiments while ignoring the experimental protocol; common mistakes include forgetting the blocking used in an experiment and confusing repeated measurements on the same experimental unit with independent replicates of the treatment applied to different experimental units.[40]

Model-free randomization inference

Model-free techniques provide a complement to model-based methods, which employ reductionist strategies of reality-simplification. The former combine, evolve, ensemble, and train algorithms that adapt dynamically to the contextual affinities of a process and learn the intrinsic characteristics of the observations.[41][42]

For example, model-free simple linear regression is based either on:

  • a random design, where the pairs of observations (X_1, Y_1), ..., (X_n, Y_n) are independent and identically distributed (iid),
  • or a deterministic design, where the variables X_1, ..., X_n are deterministic, but the corresponding response variables Y_1, ..., Y_n are random and independent with a common conditional distribution, i.e., P(Y_j ≤ y | X_j = x_j) = D_{x_j}(y), which is independent of the index j.

In either case, the model-free randomization inference for features of the common conditional distribution D_x(y) relies on some regularity conditions, e.g. functional smoothness. For instance, the population feature conditional mean, μ(x) = E(Y | X = x), can be consistently estimated via local averaging or local polynomial fitting, under the assumption that μ(x) is smooth. Also, relying on asymptotic normality or resampling, we can construct confidence intervals for the population feature, in this case, the conditional mean, μ(x).[43]
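
A minimal Python sketch of local averaging (here a Nadaraya–Watson kernel estimator; NumPy is assumed, and the data-generating function and bandwidth are invented for illustration) for estimating the conditional mean μ(x) = E(Y | X = x):

    import numpy as np

    rng = np.random.default_rng(4)
    n = 200
    x = rng.uniform(0, 1, size=n)                      # random design: iid pairs
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)  # smooth mean plus noise

    def local_average(x0, x, y, h=0.05):
        """Nadaraya-Watson estimate of E(Y | X = x0) with a Gaussian kernel."""
        w = np.exp(-0.5 * ((x - x0) / h) ** 2)  # kernel weights centred at x0
        return np.sum(w * y) / np.sum(w)

    # Estimate the conditional mean on a grid; smoothness of mu(x) is the
    # regularity condition that makes such local averaging consistent.
    for x0 in np.linspace(0.05, 0.95, 10):
        print(f"mu_hat({x0:.2f}) = {local_average(x0, x, y):+.3f} "
              f"(true {np.sin(2 * np.pi * x0):+.3f})")

Confidence intervals for μ(x) could then be obtained from the asymptotic normality of the estimator or by resampling (e.g., the bootstrap), as the text indicates.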

Paradigms for inference

Different schools of statistical inference have become established. These schools—or "paradigms"—are not mutually exclusive, and methods that work well under one paradigm often have attractive interpretations under other paradigms.

Bandyopadhyay and Forster describe four paradigms: The classical (or frequentist) paradigm, the Bayesian paradigm, the likelihoodist paradigm, and the Akaikean-Information Criterion-based paradigm.[44]

Frequentist inference

This paradigm calibrates the plausibility of propositions by considering (notional) repeated sampling of a population distribution to produce datasets similar to the one at hand. By considering the dataset's characteristics under repeated sampling, the frequentist properties of a statistical proposition can be quantified—although in practice this quantification may be challenging.

Examples of frequentist inference

Common examples of frequentist inference are the p-value and the confidence interval.

Frequentist inference, objectivity, and decision theory

One interpretation of frequentist inference (or classical inference) is that it is applicable only in terms of frequency probability; that is, in terms of repeated sampling from a population. However, the approach of Neyman[45] develops these procedures in terms of pre-experiment probabilities. That is, before undertaking an experiment, one decides on a rule for coming to a conclusion such that the probability of being correct is controlled in a suitable way: such a probability need not have a frequentist or repeated sampling interpretation. In contrast, Bayesian inference works in terms of conditional probabilities (i.e. probabilities conditional on the observed data), compared to the marginal (but conditioned on unknown parameters) probabilities used in the frequentist approach.

The frequentist procedures of significance testing and confidence intervals can be constructed without regard to utility functions. However, some elements of frequentist statistics, such as statistical decision theory, do incorporate utility functions.[citation needed] In particular, frequentist developments of optimal inference (such as minimum-variance unbiased estimators, or uniformly most powerful testing) make use of loss functions, which play the role of (negative) utility functions. Loss functions need not be explicitly stated for statistical theorists to prove that a statistical procedure has an optimality property.[46] However, loss functions are often useful for stating optimality properties: for example, median-unbiased estimators are optimal under absolute value loss functions, in that they minimize expected loss, and least squares estimators are optimal under squared error loss functions, in that they minimize expected loss.
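
The correspondence between estimators and loss functions can be checked numerically. In the following minimal Python sketch (assuming NumPy and SciPy; the skewed simulated data are purely illustrative), the value minimizing average squared-error loss over a sample is the sample mean, while the value minimizing average absolute-error loss is the sample median:

    import numpy as np
    from scipy import optimize

    rng = np.random.default_rng(5)
    x = rng.gamma(shape=2.0, scale=1.5, size=1_000)  # skewed, so mean != median

    # Minimize average squared-error loss: the optimum is the sample mean.
    sq = optimize.minimize_scalar(lambda c: np.mean((x - c) ** 2))
    # Minimize average absolute-error loss: the optimum is the sample median.
    ab = optimize.minimize_scalar(lambda c: np.mean(np.abs(x - c)))

    print(f"squared-loss minimizer  {sq.x:.3f} vs sample mean   {x.mean():.3f}")
    print(f"absolute-loss minimizer {ab.x:.3f} vs sample median {np.median(x):.3f}")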

While statisticians using frequentist inference must choose for themselves the parameters of interest, and the estimators/test statistic to be used, the absence of obviously explicit utilities and prior distributions has helped frequentist procedures to become widely viewed as 'objective'.[47]

Bayesian inference

The Bayesian calculus describes degrees of belief using the 'language' of probability; beliefs are positive, integrate to one, and obey the axioms of probability. Bayesian inference uses the available posterior beliefs as the basis for making statistical propositions.[48] There are several different justifications for using the Bayesian approach.

Examples of Bayesian inference

Common examples of Bayesian inference are the credible interval for interval estimation and Bayes factors for model comparison.

Bayesian inference, subjectivity and decision theory

Many informal Bayesian inferences are based on "intuitively reasonable" summaries of the posterior. For example, the posterior mean, median and mode, highest posterior density intervals, and Bayes Factors can all be motivated in this way. While a user's utility function need not be stated for this sort of inference, these summaries do all depend (to some extent) on stated prior beliefs, and are generally viewed as subjective conclusions. (Methods of prior construction which do not require external input have been proposed but not yet fully developed.)
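
For a conjugate model these posterior summaries have simple closed forms. A minimal Python sketch (assuming SciPy; the uniform Beta(1, 1) prior and the data counts are invented for the example) for a binomial proportion:

    from scipy import stats

    # Beta-binomial model: Beta(1, 1) prior, 7 successes observed in 20 trials.
    a0, b0 = 1.0, 1.0
    successes, trials = 7, 20
    posterior = stats.beta(a0 + successes, b0 + trials - successes)

    post_mean = posterior.mean()
    post_median = posterior.median()
    post_mode = (a0 + successes - 1) / (a0 + b0 + trials - 2)  # Beta mode, a, b > 1
    credible_95 = posterior.interval(0.95)  # central 95% credible interval

    print(f"posterior mean {post_mean:.3f}, median {post_median:.3f}, "
          f"mode {post_mode:.3f}")
    print(f"95% credible interval: ({credible_95[0]:.3f}, {credible_95[1]:.3f})")

All of these summaries depend on the stated prior; replacing Beta(1, 1) with a different prior changes the numbers, which is what makes such conclusions subjective in the sense described above.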

Formally, Bayesian inference is calibrated with reference to an explicitly stated utility, or loss function; the 'Bayes rule' is the one which maximizes expected utility, averaged over the posterior uncertainty. Formal Bayesian inference therefore automatically provides optimal decisions in a decision theoretic sense. Given assumptions, data and utility, Bayesian inference can be made for essentially any problem, although not every statistical inference need have a Bayesian interpretation. Analyses which are not formally Bayesian can be (logically) incoherent; a feature of Bayesian procedures which use proper priors (i.e. those integrable to one) is that they are guaranteed to be coherent. Some advocates of Bayesian inference assert that inference must take place in this decision-theoretic framework, and that Bayesian inference should not conclude with the evaluation and summarization of posterior beliefs.

Likelihood-based inference

Likelihood-based inference is a paradigm used to estimate the parameters of a statistical model based on observed data. Likelihoodism approaches statistics by using the likelihood function, denoted as L(θ | x), which quantifies the probability of observing the given data x under a specific set of parameter values θ. In likelihood-based inference, the goal is to find the set of parameter values that maximizes the likelihood function, or equivalently, maximizes the probability of observing the given data.

The process of likelihood-based inference usually involves the following steps (steps 1–4 are illustrated in a short sketch after the list):

  1. Formulating the statistical model: A statistical model is defined based on the problem at hand, specifying the distributional assumptions and the relationship between the observed data and the unknown parameters. The model can be simple, such as a normal distribution with known variance, or complex, such as a hierarchical model with multiple levels of random effects.
  2. Constructing the likelihood function: Given the statistical model, the likelihood function is constructed by evaluating the joint probability density or mass function of the observed data as a function of the unknown parameters. This function represents the probability of observing the data for different values of the parameters.
  3. Maximizing the likelihood function: The next step is to find the set of parameter values that maximizes the likelihood function. This can be achieved using optimization techniques such as numerical optimization algorithms. The estimated parameter values, often denoted as θ̂, are the maximum likelihood estimates (MLEs).
  4. Assessing uncertainty: Once the MLEs are obtained, it is crucial to quantify the uncertainty associated with the parameter estimates. This can be done by calculating standard errors, confidence intervals, or conducting hypothesis tests based on asymptotic theory or simulation techniques such as bootstrapping.
  5. Model checking: After obtaining the parameter estimates and assessing their uncertainty, it is important to assess the adequacy of the statistical model. This involves checking the assumptions made in the model and evaluating the fit of the model to the data using goodness-of-fit tests, residual analysis, or graphical diagnostics.
  6. Inference and interpretation: Finally, based on the estimated parameters and model assessment, statistical inference can be performed. This involves drawing conclusions about the population parameters, making predictions, or testing hypotheses based on the estimated model.
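
A minimal end-to-end Python sketch of steps 1–4 (assuming NumPy and SciPy; the normal model and simulated data are illustrative choices), fitting a normal distribution by numerical maximization of the log-likelihood and approximating a standard error from the inverse Hessian at the optimum:

    import numpy as np
    from scipy import optimize, stats

    rng = np.random.default_rng(6)
    data = rng.normal(loc=3.0, scale=2.0, size=200)  # step 1: assume a normal model

    def neg_log_likelihood(params, x):
        """Step 2: negative log-likelihood of N(mu, sigma^2)."""
        mu, log_sigma = params            # optimize log(sigma) to keep sigma > 0
        return -np.sum(stats.norm.logpdf(x, loc=mu, scale=np.exp(log_sigma)))

    # Step 3: maximize the likelihood numerically (BFGS by default).
    result = optimize.minimize(neg_log_likelihood, x0=[0.0, 0.0], args=(data,))
    mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])

    # Step 4: approximate standard error of mu_hat from the inverse Hessian of
    # the negative log-likelihood (the observed information) at the MLE.
    se_mu = np.sqrt(result.hess_inv[0, 0])
    print(f"mu_hat = {mu_hat:.3f} (SE {se_mu:.3f}), sigma_hat = {sigma_hat:.3f}")

Steps 5 and 6 would then proceed with residual or goodness-of-fit checks and with hypothesis tests or predictions based on the fitted model.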

AIC-based inference

The Akaike information criterion (AIC) is an estimator of the relative quality of statistical models for a given set of data. Given a collection of models for the data, AIC estimates the quality of each model, relative to each of the other models. Thus, AIC provides a means for model selection.

AIC is founded on information theory: it offers an estimate of the relative information lost when a given model is used to represent the process that generated the data. (In doing so, it deals with the trade-off between the goodness of fit of the model and the simplicity of the model.)
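
Concretely, AIC = 2k − 2 ln(L̂), where k is the number of estimated parameters and L̂ is the maximized likelihood. A minimal Python sketch (assuming NumPy and SciPy; the simulated data and the two candidate models are invented for illustration):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    data = rng.gamma(shape=2.0, scale=1.0, size=100)  # positive, right-skewed data

    # Candidate 1: normal model, k = 2 parameters (mean and standard deviation).
    mu, sigma = data.mean(), data.std(ddof=0)         # normal MLEs
    aic_norm = 2 * 2 - 2 * np.sum(stats.norm.logpdf(data, loc=mu, scale=sigma))

    # Candidate 2: exponential model, k = 1 parameter (the scale).
    aic_exp = 2 * 1 - 2 * np.sum(stats.expon.logpdf(data, scale=data.mean()))

    print(f"AIC normal:      {aic_norm:.1f}")
    print(f"AIC exponential: {aic_exp:.1f}  (the lower AIC is preferred)")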

Other paradigms for inference

Minimum description length

The minimum description length (MDL) principle has been developed from ideas in information theory[49] and the theory of Kolmogorov complexity.[50] The MDL principle selects statistical models that maximally compress the data; inference proceeds without assuming counterfactual or non-falsifiable "data-generating mechanisms" or probability models for the data, as might be done in frequentist or Bayesian approaches.

However, if a "data generating mechanism" does exist in reality, then according to Shannon's source coding theorem it provides the MDL description of the data, on average and asymptotically.[51] In minimizing description length (or descriptive complexity), MDL estimation is similar to maximum likelihood estimation and maximum a posteriori estimation (using maximum-entropy Bayesian priors). However, MDL avoids assuming that the underlying probability model is known; the MDL principle can also be applied without assumptions that e.g. the data arose from independent sampling.[51][52]

The MDL principle has been applied in communication-coding theory in information theory, in linear regression,[52] and in data mining.[50]

The evaluation of MDL-based inferential procedures often uses techniques or criteria from computational complexity theory.[53]

Fiducial inference

Fiducial inference was an approach to statistical inference based on fiducial probability, also known as a "fiducial distribution". In subsequent work, this approach has been called ill-defined, extremely limited in applicability, and even fallacious.[54][55] However, this argument is the same as that which shows[56] that a so-called confidence distribution is not a valid probability distribution and, since this has not invalidated the application of confidence intervals, it does not necessarily invalidate conclusions drawn from fiducial arguments. An attempt was made to reinterpret the early work of Fisher's fiducial argument as a special case of an inference theory using upper and lower probabilities.[57]

Structural inference

Developing ideas of Fisher and of Pitman from 1938 to 1939,[58] George A. Barnard developed "structural inference" or "pivotal inference",[59] an approach using invariant probabilities on group families. Barnard reformulated the arguments behind fiducial inference on a restricted class of models on which "fiducial" procedures would be well-defined and useful. Donald A. S. Fraser developed a general theory for structural inference[60] based on group theory and applied this to linear models.[61] The theory formulated by Fraser has close links to decision theory and Bayesian statistics and can provide optimal frequentist decision rules if they exist.[62]

Inference topics

The topics below are usually included in the area of statistical inference.

  1. Statistical assumptions
  2. Statistical decision theory
  3. Estimation theory
  4. Statistical hypothesis testing
  5. Revising opinions in statistics
  6. Design of experiments, the analysis of variance, and regression
  7. Survey sampling
  8. Summarizing statistical data

Predictive inference

Predictive inference is an approach to statistical inference that emphasizes the prediction of future observations based on past observations.

Initially, predictive inference was based on observable parameters and it was the main purpose of studying probability,[citation needed] but it fell out of favor in the 20th century due to a new parametric approach pioneered by Bruno de Finetti. The approach modeled phenomena as a physical system observed with error (e.g., celestial mechanics). De Finetti's idea of exchangeability—that future observations should behave like past observations—came to the attention of the English-speaking world with the 1974 translation from French of his 1937 paper,[63] and has since been propounded by such statisticians as Seymour Geisser.[64]


Notes

  1. ^ According to Peirce, acceptance means that inquiry on this question ceases for the time being. In science, all scientific theories are revisable.

References

Citations

  1. ^ Upton, G., Cook, I. (2008) Oxford Dictionary of Statistics, OUP. ISBN 978-0-19-954145-4.
  2. ^ "TensorFlow Lite inference". The term inference refers to the process of executing a TensorFlow Lite model on-device in order to make predictions based on input data.
  3. ^ Johnson, Richard (12 March 2016). "Statistical Inference". Encyclopedia of Mathematics. Springer: The European Mathematical Society. Retrieved 26 October 2022.
  4. ^ Konishi & Kitagawa (2008), p. 75.
  5. ^ Cox (2006), p. 197.
  6. ^ "Statistical inference - Encyclopedia of Mathematics". www.encyclopediaofmath.org. Retrieved 2025-08-06.
  7. ^ a b Cox (2006) page 2
  8. ^ Evans, Michael; et al. (2004). Probability and Statistics: The Science of Uncertainty. Freeman and Company. p. 267. ISBN 9780716747420.
  9. ^ van der Vaart, A.W. (1998) Asymptotic Statistics Cambridge University Press. ISBN 0-521-78450-6 (page 341)
  10. ^ Kruskal 1988
  11. ^ Freedman, D.A. (2008) "Survival analysis: An Epidemiological hazard?". The American Statistician (2008) 62: 110-119. (Reprinted as Chapter 11 (pages 169–192) of Freedman (2010)).
  12. ^ Berk, R. (2003) Regression Analysis: A Constructive Critique (Advanced Quantitative Techniques in the Social Sciences) (v. 11) Sage Publications. ISBN 0-7619-2904-5
  13. ^ a b Brewer, Ken (2002). Combined Survey Sampling Inference: Weighing of Basu's Elephants. Hodder Arnold. p. 6. ISBN 978-0340692295.
  14. ^ a b Jørgen Hoffmann-Jørgensen's Probability With a View Towards Statistics, Volume I. Page 399 [full citation needed]
  15. ^ Le Cam (1986) [page needed]
  16. ^ Erik Torgerson (1991) Comparison of Statistical Experiments, volume 36 of Encyclopedia of Mathematics. Cambridge University Press. [full citation needed]
  17. ^ Liese, Friedrich & Miescke, Klaus-J. (2008). Statistical Decision Theory: Estimation, Testing, and Selection. Springer. ISBN 978-0-387-73193-3.
  18. ^ Kolmogorov (1963, p.369): "The frequency concept, based on the notion of limiting frequency as the number of trials increases to infinity, does not contribute anything to substantiate the applicability of the results of probability theory to real practical problems where we have always to deal with a finite number of trials".
  19. ^ "Indeed, limit theorems 'as  tends to infinity' are logically devoid of content about what happens at any particular . All they can do is suggest certain approaches whose performance must then be checked on the case at hand." — Le Cam (1986) (page xiv)
  20. ^ Pfanzagl (1994): "The crucial drawback of asymptotic theory: What we expect from asymptotic theory are results which hold approximately . . . . What asymptotic theory has to offer are limit theorems."(page ix) "What counts for applications are approximations, not limits." (page 188)
  21. ^ Pfanzagl (1994) : "By taking a limit theorem as being approximately true for large sample sizes, we commit an error the size of which is unknown. [. . .] Realistic information about the remaining errors may be obtained by simulations." (page ix)
  22. ^ Neyman, J.(1934) "On the two different aspects of the representative method: The method of stratified sampling and the method of purposive selection", Journal of the Royal Statistical Society, 97 (4), 557–625 JSTOR 2342192
  23. ^ a b Hinkelmann and Kempthorne(2008) [page needed]
  24. ^ ASA Guidelines for the first course in statistics for non-statisticians. (available at the ASA website)
  25. ^ David A. Freedman et alia's Statistics.
  26. ^ Moore et al. (2015).
  27. ^ Gelman A. et al. (2013). Bayesian Data Analysis (Chapman & Hall).
  28. ^ Peirce (1877-1878)
  29. ^ Peirce (1883)
  30. ^ Freedman, Pisani & Purves 1978.
  31. ^ David A. Freedman Statistical Models.
  32. ^ Rao, C.R. (1997) Statistics and Truth: Putting Chance to Work, World Scientific. ISBN 981-02-3111-3
  33. ^ Peirce; Freedman; Moore et al. (2015).[citation needed]
  34. ^ Box, G.E.P. and Friends (2006) Improving Almost Anything: Ideas and Essays, Revised Edition, Wiley. ISBN 978-0-471-72755-2
  35. ^ Cox (2006), p. 196.
  36. ^ ASA Guidelines for the first course in statistics for non-statisticians. (available at the ASA website)
    • David A. Freedman et alia's Statistics.
    • Moore et al. (2015).
  37. ^ Neyman, Jerzy. 1923 [1990]. "On the Application of Probability Theory to Agricultural Experiments. Essay on Principles. Section 9." Statistical Science 5 (4): 465–472. Trans. Dorota M. Dabrowska and Terence P. Speed.
  38. ^ Hinkelmann & Kempthorne (2008) [page needed]
  39. ^ Dinov, Ivo; Palanimalai, Selvam; Khare, Ashwini; Christou, Nicolas (2018). "Randomization-based statistical inference: A resampling and simulation infrastructure". Teaching Statistics. 40 (2): 64–73. doi:10.1111/test.12156. PMC 6155997. PMID 30270947.
  40. ^ Hinkelmann and Kempthorne (2008) Chapter 6.
  41. ^ Dinov, Ivo; Palanimalai, Selvam; Khare, Ashwini; Christou, Nicolas (2018). "Randomization-based statistical inference: A resampling and simulation infrastructure". Teaching Statistics. 40 (2): 64–73. doi:10.1111/test.12156. PMC 6155997. PMID 30270947.
  42. ^ Tang, Ming; Gao, Chao; Goutman, Stephen; Kalinin, Alexandr; Mukherjee, Bhramar; Guan, Yuanfang; Dinov, Ivo (2019). "Model-Based and Model-Free Techniques for Amyotrophic Lateral Sclerosis Diagnostic Prediction and Patient Clustering". Neuroinformatics. 17 (3): 407–421. doi:10.1007/s12021-018-9406-9. PMC 6527505. PMID 30460455.
  43. ^ Politis, D.N. (2019). "Model-free inference in statistics: how and why". IMS Bulletin. 48.
  44. ^ Bandyopadhyay & Forster (2011). See the book's Introduction (p.3) and "Section III: Four Paradigms of Statistics".
  45. ^ Neyman, J. (1937). "Outline of a Theory of Statistical Estimation Based on the Classical Theory of Probability". Philosophical Transactions of the Royal Society of London A. 236 (767): 333–380. Bibcode:1937RSPTA.236..333N. doi:10.1098/rsta.1937.0005. JSTOR 91337.
  46. ^ Preface to Pfanzagl.
  47. ^ Little, Roderick J. (2006). "Calibrated Bayes: A Bayes/Frequentist Roadmap". The American Statistician. 60 (3): 213–223. doi:10.1198/000313006X117837. ISSN 0003-1305. JSTOR 27643780. S2CID 53505632.
  48. ^ Lee, Se Yoon (2021). "Gibbs sampler and coordinate ascent variational inference: A set-theoretical review". Communications in Statistics - Theory and Methods. 51 (6): 1549–1568. arXiv:2008.01006. doi:10.1080/03610926.2021.1921214. S2CID 220935477.
  49. ^ Soofi (2000)
  50. ^ a b Hansen & Yu (2001)
  51. ^ a b Hansen and Yu (2001), page 747.
  52. ^ a b Rissanen (1989), page 84
  53. ^ Joseph F. Traub, G. W. Wasilkowski, and H. Wozniakowski. (1988) [page needed]
  54. ^ Neyman (1956)
  55. ^ Zabell (1992)
  56. ^ Cox (2006) page 66
  57. ^ Hampel 2003.
  58. ^ Davison, page 12. [full citation needed]
  59. ^ Barnard, G.A. (1995) "Pivotal Models and the Fiducial Argument", International Statistical Review, 63 (3), 309–323. JSTOR 1403482
  60. ^ Fraser, D. A. S. (1968). The structure of inference. New York: Wiley. ISBN 0-471-27548-4. OCLC 440926.
  61. ^ Fraser, D. A. S. (1979). Inference and linear models. London: McGraw-Hill. ISBN 0-07-021910-9. OCLC 3559629.
  62. ^ Taraldsen, Gunnar; Lindqvist, Bo Henry (2013). "Fiducial theory and optimal inference". The Annals of Statistics. 41 (1). arXiv:1301.1717. doi:10.1214/13-AOS1083. ISSN 0090-5364. S2CID 88520957.
  63. ^ De Finetti, Bruno (1937). "La Prévision: ses lois logiques, ses sources subjectives". Annales de l'Institut Henri Poincaré. 7 (1): 1–68. ISSN 0365-320X. Translated in De Finetti, Bruno (1992). "Foresight: Its Logical Laws, Its Subjective Sources". Breakthroughs in Statistics. Springer Series in Statistics. pp. 134–174. doi:10.1007/978-1-4612-0919-5_10. ISBN 978-0-387-94037-3.
  64. ^ Geisser, Seymour (1993) Predictive Inference: An Introduction, CRC Press. ISBN 0-412-03471-9
