
In mathematics, a low-discrepancy sequence is a sequence with the property that for all values of $N$, its subsequence $x_1, \ldots, x_N$ has a low discrepancy.

Roughly speaking, the discrepancy of a sequence is low if the proportion of points in the sequence falling into an arbitrary set B is close to proportional to the measure of B, as would happen on average (but not for particular samples) in the case of an equidistributed sequence. Specific definitions of discrepancy differ regarding the choice of B (hyperspheres, hypercubes, etc.) and how the discrepancy for every B is computed (usually normalized) and combined (usually by taking the worst value).

Low-discrepancy sequences are also called quasirandom sequences, due to their common use as a replacement of uniformly distributed random numbers. The "quasi" modifier is used to denote more clearly that the values of a low-discrepancy sequence are neither random nor pseudorandom, but such sequences share some properties of random variables and in certain applications such as the quasi-Monte Carlo method their lower discrepancy is an important advantage.

Applications

Error in estimated kurtosis as a function of number of datapoints. 'Additive quasirandom' gives the maximum error when $c = (\sqrt{5}-1)/2$. 'Random' gives the average error over six runs of random numbers, where the average is taken to reduce the magnitude of the wild fluctuations.

Quasirandom numbers have an advantage over pure random numbers in that they cover the domain of interest quickly and evenly.

Two useful applications are in finding the characteristic function of a probability density function, and in finding the derivative function of a deterministic function with a small amount of noise. Quasirandom numbers allow higher-order moments to be calculated to high accuracy very quickly.

Further applications include finding the mean, standard deviation, skewness and kurtosis of a statistical distribution, and finding the integral and the global maxima and minima of difficult deterministic functions. Quasirandom numbers can also be used for providing starting points for deterministic algorithms that only work locally, such as Newton–Raphson iteration.

Quasirandom numbers can also be combined with search algorithms. With a search algorithm, quasirandom numbers can be used to find the mode, median, confidence intervals and cumulative distribution of a statistical distribution, and all local minima and all solutions of deterministic functions.

Low-discrepancy sequences in numerical integration


Various methods of numerical integration can be phrased as approximating the integral of a function $f$ in some interval, e.g. $[0,1]$, as the average of the function evaluated at a set $\{x_1, \ldots, x_N\}$ in that interval:

$$\int_0^1 f(u)\,du \approx \frac{1}{N} \sum_{i=1}^{N} f(x_i).$$

If the points are chosen as $x_i = i/N$, this is the rectangle rule. If the points are chosen to be randomly (or pseudorandomly) distributed, this is the Monte Carlo method. If the points are chosen as elements of a low-discrepancy sequence, this is the quasi-Monte Carlo method. A remarkable result, the Koksma–Hlawka inequality (stated below), shows that the error of such a method can be bounded by the product of two terms, one of which depends only on $f$, and the other one is the discrepancy of the set $\{x_1, \ldots, x_N\}$.

It is convenient to construct the set $\{x_1, \ldots, x_N\}$ in such a way that if a set with $N+1$ elements is constructed, the previous $N$ elements need not be recomputed. The rectangle rule uses a point set which has low discrepancy, but in general the elements must be recomputed if $N$ is increased. Elements need not be recomputed in the random Monte Carlo method if $N$ is increased, but the point sets do not have minimal discrepancy. By using low-discrepancy sequences we aim for low discrepancy and no need for recomputation, but low-discrepancy sequences can only be incrementally good on discrepancy if we allow no recomputation.
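The following Python sketch contrasts the three choices of points on a simple one-dimensional test integrand. The integrand, the sample size, and the use of the golden-ratio additive recurrence (described later in this article) as the low-discrepancy sequence are assumptions made only for this example.

```python
import math
import random

def estimate(points, f):
    """Average of f over the point set -- the common form of all three rules."""
    return sum(f(x) for x in points) / len(points)

def f(x):                      # illustrative integrand; its exact integral over
    return math.sin(x)         # [0, 1] is 1 - cos(1) ~= 0.459698

exact = 1.0 - math.cos(1.0)
N = 1000

rect = [i / N for i in range(1, N + 1)]                 # rectangle rule: x_i = i/N
rng = random.Random(0)
mc = [rng.random() for _ in range(N)]                   # Monte Carlo: pseudorandom points
alpha = (math.sqrt(5) - 1) / 2                          # quasi-Monte Carlo: golden-ratio
qmc = [(i * alpha) % 1.0 for i in range(1, N + 1)]      # additive recurrence (see below)

for name, pts in [("rectangle", rect), ("Monte Carlo", mc), ("quasi-Monte Carlo", qmc)]:
    print(f"{name:18s} error = {abs(estimate(pts, f) - exact):.2e}")
```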

Definition of discrepancy


The discrepancy $D_N(P)$ of a set $P = \{x_1, \ldots, x_N\}$ is defined, using Niederreiter's notation, as

$$D_N(P) = \sup_{B \in J} \left| \frac{A(B; P)}{N} - \lambda_s(B) \right|$$

where $\lambda_s$ is the $s$-dimensional Lebesgue measure, $A(B; P)$ is the number of points in $P$ that fall into $B$, and $J$ is the set of $s$-dimensional intervals or boxes of the form

$$\prod_{i=1}^{s} [a_i, b_i) = \{ x \in \mathbf{R}^s : a_i \le x_i < b_i \}$$

where $0 \le a_i < b_i \le 1$.

The star-discrepancy $D_N^*(P)$ is defined similarly, except that the supremum is taken over the set $J^*$ of rectangular boxes of the form

$$\prod_{i=1}^{s} [0, u_i)$$

where $u_i$ is in the half-open interval $[0, 1)$.

The two are related by

$$D_N^* \le D_N \le 2^s D_N^*.$$

Note: With these definitions, discrepancy represents the worst-case or maximum point density deviation of a uniform set. However, other error measures are also meaningful, leading to other definitions and variation measures. For instance, $L^2$-discrepancy or modified centered $L^2$-discrepancy are also used intensively to compare the quality of uniform point sets. Both are much easier to calculate for large $N$ and $s$.
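For $s = 1$ the star discrepancy can be computed exactly from the sorted points. The sketch below implements one well-known closed form for the one-dimensional case; the example point sets are illustrative assumptions.

```python
import random

def star_discrepancy_1d(points):
    """Exact star discrepancy D*_N of a finite point set in [0, 1), for s = 1:
        D*_N = 1/(2N) + max_i | x_(i) - (2i - 1)/(2N) |
    with x_(1) <= ... <= x_(N) the sorted points."""
    xs = sorted(points)
    n = len(xs)
    return 1.0 / (2 * n) + max(abs(x - (2 * i - 1) / (2.0 * n))
                               for i, x in enumerate(xs, start=1))

N = 100
rng = random.Random(1)
print(star_discrepancy_1d([(2 * i - 1) / (2.0 * N) for i in range(1, N + 1)]))  # 0.005, the minimum 1/(2N)
print(star_discrepancy_1d([rng.random() for _ in range(N)]))                    # typically noticeably larger
```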

The Koksma–Hlawka inequality


Let $\bar{I}^s$ be the $s$-dimensional unit cube, $\bar{I}^s = [0,1] \times \cdots \times [0,1]$. Let $f$ have bounded variation $V(f)$ on $\bar{I}^s$ in the sense of Hardy and Krause. Then for any $x_1, \ldots, x_N$ in $I^s = [0,1)^s$,

$$\left| \frac{1}{N} \sum_{i=1}^{N} f(x_i) - \int_{\bar{I}^s} f(u)\,du \right| \le V(f)\, D_N^*(x_1, \ldots, x_N).$$

The Koksma–Hlawka inequality is sharp in the following sense: For any point set $\{x_1, \ldots, x_N\}$ in $I^s$ and any $\varepsilon > 0$, there is a function $f$ with bounded variation and $V(f) = 1$ such that

$$\left| \frac{1}{N} \sum_{i=1}^{N} f(x_i) - \int_{\bar{I}^s} f(u)\,du \right| > D_N^*(x_1, \ldots, x_N) - \varepsilon.$$

Therefore, the quality of a numerical integration rule depends only on the discrepancy $D_N^*(x_1, \ldots, x_N)$.

The formula of Hlawka–Zaremba


Let $D = \{1, 2, \ldots, d\}$. For $\emptyset \ne u \subseteq D$ we write $dx_u = \prod_{j \in u} dx_j$ and denote by $(x_u, 1)$ the point obtained from $x$ by replacing the coordinates not in $u$ by $1$. Then

$$\frac{1}{N} \sum_{i=1}^{N} f(x_i) - \int_{\bar{I}^s} f(u)\,du = \sum_{\emptyset \ne u \subseteq D} (-1)^{|u|} \int_{[0,1]^{|u|}} \operatorname{disc}(x_u, 1)\, \frac{\partial^{|u|}}{\partial x_u} f(x_u, 1)\, dx_u,$$

where

$$\operatorname{disc}(z) = \frac{1}{N} \sum_{i=1}^{N} \prod_{j=1}^{d} \mathbf{1}_{[0, z_j)}(x_{i,j}) - \prod_{j=1}^{d} z_j$$

is the discrepancy function.

The L2 version of the Koksma–Hlawka inequality


Applying the Cauchy–Schwarz inequality for integrals and sums to the Hlawka–Zaremba identity, we obtain an $L^2$ version of the Koksma–Hlawka inequality:

$$\left| \frac{1}{N} \sum_{i=1}^{N} f(x_i) - \int_{\bar{I}^s} f(u)\,du \right| \le \|f\|_d\, \operatorname{disc}_d(\{t_i\}),$$

where

$$\operatorname{disc}_d(\{t_i\}) = \left( \sum_{\emptyset \ne u \subseteq D} \int_{[0,1]^{|u|}} \operatorname{disc}(x_u, 1)^2\, dx_u \right)^{1/2}$$

and

$$\|f\|_d = \left( \sum_{u \subseteq D} \int_{[0,1]^{|u|}} \left| \frac{\partial^{|u|}}{\partial x_u} f(x_u, 1) \right|^2 dx_u \right)^{1/2}.$$

$L^2$ discrepancy has a high practical importance because fast explicit calculations are possible for a given point set. This way it is easy to create point set optimizers that use $L^2$ discrepancy as a criterion.
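As an illustration of such an explicit calculation, the sketch below computes the plain $L^2$ star discrepancy, a closely related quantity, using Warnock's closed-form $O(N^2 d)$ formula; the example point set is an arbitrary assumption.

```python
import math

def l2_star_discrepancy(points):
    """L2 star discrepancy of a point set in [0, 1)^d, via Warnock's O(N^2 d) formula:
        T^2 = 3^(-d) - (2/N) sum_i prod_k (1 - x_ik^2)/2
              + (1/N^2) sum_{i,j} prod_k (1 - max(x_ik, x_jk))."""
    n, d = len(points), len(points[0])
    t1 = 3.0 ** (-d)
    t2 = (2.0 / n) * sum(math.prod((1.0 - xk * xk) / 2.0 for xk in x) for x in points)
    t3 = sum(math.prod(1.0 - max(xk, yk) for xk, yk in zip(x, y))
             for x in points for y in points) / (n * n)
    return math.sqrt(t1 - t2 + t3)

# Example on an arbitrary 2-D point set.
print(l2_star_discrepancy([(0.25, 0.25), (0.75, 0.75), (0.25, 0.75), (0.75, 0.25)]))
```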

The Erdős–Turán–Koksma inequality


It is computationally hard to find the exact value of the discrepancy of large point sets. The Erdős–Turán–Koksma inequality provides an upper bound.

Let $x_1, \ldots, x_N$ be points in $I^s$ and $H$ be an arbitrary positive integer. Then

$$D_N^*(x_1, \ldots, x_N) \le \left( \frac{3}{2} \right)^s \left( \frac{2}{H+1} + \sum_{0 < \|h\|_\infty \le H} \frac{1}{r(h)} \left| \frac{1}{N} \sum_{n=1}^{N} e^{2\pi i \langle h, x_n \rangle} \right| \right)$$

where

$$r(h) = \prod_{i=1}^{s} \max\{1, |h_i|\} \quad \text{for} \quad h = (h_1, \ldots, h_s) \in \mathbf{Z}^s.$$

The main conjectures


Conjecture 1. There is a constant $c_s$ depending only on the dimension $s$, such that

$$D_N^*(x_1, \ldots, x_N) \ge c_s \frac{(\ln N)^{s-1}}{N}$$

for any finite point set $\{x_1, \ldots, x_N\}$.

Conjecture 2. There is a constant $c'_s$ depending only on $s$, such that

$$D_N^*(x_1, \ldots, x_N) \ge c'_s \frac{(\ln N)^{s}}{N}$$

for infinitely many $N$ for any infinite sequence $x_1, x_2, x_3, \ldots$.

These conjectures are equivalent. They have been proved for $s \le 2$ by W. M. Schmidt. In higher dimensions, the corresponding problem is still open. The best-known lower bounds are due to Michael Lacey and collaborators.

Lower bounds


Let $s = 1$. Then

$$D_N^*(x_1, \ldots, x_N) \ge \frac{1}{2N}$$

for any finite point set $\{x_1, \ldots, x_N\}$.

Let $s = 2$. W. M. Schmidt proved that for any finite point set $\{x_1, \ldots, x_N\}$,

$$D_N^*(x_1, \ldots, x_N) \ge C \frac{\log N}{N}$$

where

$$C = \max_{a \ge 3} \frac{1}{16} \cdot \frac{a - 2}{a \log a} = 0.023335\ldots$$

For arbitrary dimensions $s > 1$, K. F. Roth proved that

$$D_N^*(x_1, \ldots, x_N) \ge \frac{1}{2^{4s}} \frac{1}{((s-1)\log 2)^{\frac{s-1}{2}}} \frac{(\log N)^{\frac{s-1}{2}}}{N}$$

for any finite point set $\{x_1, \ldots, x_N\}$. Jozef Beck[1] established a double log improvement of this result in three dimensions. This was improved by D. Bilyk and M. T. Lacey to a power of a single logarithm. The best known bound for $s > 2$ is due to D. Bilyk, M. T. Lacey, and A. Vagharshakyan.[2] There exists a $t > 0$ depending on $s$ so that

$$D_N^*(x_1, \ldots, x_N) \ge c_s \frac{(\log N)^{\frac{s-1}{2} + t}}{N}$$

for any finite point set $\{x_1, \ldots, x_N\}$.

Construction of low-discrepancy sequences


Because any distribution of random numbers can be mapped onto a uniform distribution, and quasirandom numbers are mapped in the same way, this article only concerns generation of quasirandom numbers on a multidimensional uniform distribution.

There are constructions of sequences known such that

$$D_N^*(x_1, \ldots, x_N) \le C \frac{(\ln N)^{s}}{N}$$

where $C$ is a certain constant depending on the sequence. After Conjecture 2, these sequences are believed to have the best possible order of convergence. Examples below are the van der Corput sequence, the Halton sequences, and the Sobol' sequences. One general limitation is that construction methods can usually only guarantee the order of convergence. Practically, low discrepancy can only be achieved if $N$ is large enough, and for large given $s$ this minimum $N$ can be very large. This means running a Monte-Carlo analysis with many variables and only a moderate number of points from a low-discrepancy sequence generator may offer only a very minor accuracy improvement[citation needed].

Random numbers


Sequences of quasirandom numbers can be generated from random numbers by imposing a negative correlation on those random numbers. One way to do this is to start with a set of random numbers $r_i$ on $[0, 0.5)$ and construct quasirandom numbers $s_i$ which are uniform on $[0, 1)$ using

$$s_i = r_i \ \text{for } i \text{ odd}, \qquad s_i = 0.5 + r_i \ \text{for } i \text{ even}.$$

A second way to do it with the starting random numbers is to construct a random walk with offset 0.5, as in

$$s_i = (s_{i-1} + 0.5 + r_i) \bmod 1.$$

That is, take the previous quasirandom number, add 0.5 and the random number, and take the result modulo 1.
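A minimal Python sketch of both constructions; the seed, the sample size, and the starting value of the walk are arbitrary assumptions.

```python
import random

rng = random.Random(0)                          # seed chosen arbitrarily for reproducibility
N = 1000
r = [0.5 * rng.random() for _ in range(N)]      # random numbers r_i on [0, 0.5)

# Method 1: alternate the two halves of the unit interval.
s1 = [r_i if i % 2 == 1 else 0.5 + r_i          # s_i = r_i (i odd), 0.5 + r_i (i even)
      for i, r_i in enumerate(r, start=1)]

# Method 2: random walk with offset 0.5, s_i = (s_{i-1} + 0.5 + r_i) mod 1.
s2, prev = [], 0.0                              # starting value chosen as 0
for r_i in r:
    prev = (prev + 0.5 + r_i) % 1.0
    s2.append(prev)
```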

For more than one dimension, Latin squares of the appropriate dimension can be used to provide offsets to ensure that the whole domain is covered evenly.

Coverage of the unit square. Left: additive quasirandom numbers with c = 0.5545497..., 0.308517... Right: random numbers. From top to bottom: 10, 100, 1000, 10000 points.

Additive recurrence


For any irrational $\alpha$, the sequence

$$s_n = \{ s_0 + n \alpha \}$$

has discrepancy tending to zero. Note that the sequence can be defined recursively by

$$s_{n+1} = (s_n + \alpha) \bmod 1.$$

A good value of $\alpha$ gives lower discrepancy than a sequence of independent uniform random numbers.

The discrepancy can be bounded by the approximation exponent of $\alpha$. If the approximation exponent is $\mu$, then for any $\varepsilon > 0$, the following bound holds:[3]

$$D_N((s_n)) = O_\varepsilon\!\left(N^{-1/(\mu-1)+\varepsilon}\right).$$

By the Thue–Siegel–Roth theorem, the approximation exponent of any irrational algebraic number is 2, giving a bound of $N^{-1+\varepsilon}$ above.

The recurrence relation above is similar to the recurrence relation used by a linear congruential generator, a poor-quality pseudorandom number generator:[4]

$$r_{n+1} = (a\, r_n + c) \bmod m.$$

For the low discrepancy additive recurrence above, $a$ and $m$ are chosen to be 1. Note, however, that this will not generate independent random numbers, so it should not be used for purposes requiring independence.

The value of $c$ with lowest discrepancy is the fractional part of the golden ratio:[5]

$$c = \frac{\sqrt{5} - 1}{2} = \varphi - 1 \approx 0.618034.$$

Another value that is nearly as good is the fractional part of the silver ratio, which is the fractional part of the square root of 2:

$$c = \sqrt{2} - 1 \approx 0.414214.$$

In more than one dimension, separate quasirandom numbers are needed for each dimension. A convenient set of values that is often used is the square roots of the primes from two up, all taken modulo 1:

$$c = \sqrt{2},\ \sqrt{3},\ \sqrt{5},\ \sqrt{7},\ \sqrt{11},\ \ldots \pmod 1.$$

However, a set of values based on the generalised golden ratio has been shown to produce more evenly distributed points.[6]
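A minimal Python sketch of the additive recurrence, in one dimension with the fractional part of the golden ratio and in several dimensions with square roots of primes taken modulo 1; the starting value $s_0 = 0$ and the number of points are arbitrary assumptions.

```python
import math
from itertools import islice

def additive_recurrence(alpha, s0=0.0):
    """Kronecker sequence s_n = {s_0 + n*alpha}, generated recursively as
    s_{n+1} = (s_n + alpha) mod 1."""
    s = s0
    while True:
        s = (s + alpha) % 1.0
        yield s

# One dimension: alpha = fractional part of the golden ratio.
golden = (math.sqrt(5) - 1) / 2
print(list(islice(additive_recurrence(golden), 5)))

# Several dimensions: one irrational per coordinate, e.g. square roots of primes mod 1.
alphas = [math.sqrt(p) % 1.0 for p in (2, 3, 5, 7, 11)]
gens = [additive_recurrence(a) for a in alphas]
points_5d = [tuple(next(g) for g in gens) for _ in range(1000)]
```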

The list of pseudorandom number generators lists methods for generating independent pseudorandom numbers. Note: In few dimensions, the recursive recurrence above leads to uniform sets of good quality, but for larger $s$ other point set generators can offer much lower discrepancies.

van der Corput sequence


Let

$$n = \sum_{k=0}^{L-1} d_k(n)\, b^k$$

be the $b$-ary representation of the positive integer $n \ge 1$, i.e. $0 \le d_k(n) < b$. Set

$$x_n = \sum_{k=0}^{L-1} d_k(n)\, b^{-k-1}.$$

Then there is a constant $C$ depending only on $b$ such that $(x_n)_{n \ge 1}$ satisfies

$$D_N^*(x_1, \ldots, x_N) \le C \frac{\log N}{N},$$

where $D_N^*$ is the star discrepancy.
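A minimal Python sketch of the construction; the helper name van_der_corput is an arbitrary choice.

```python
def van_der_corput(n, b=2):
    """x_n in base b: reflect the base-b digits of n about the radix point,
    i.e. n = sum_k d_k b^k  maps to  x_n = sum_k d_k b^(-k-1)."""
    x, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, b)
        denom *= b
        x += digit / denom
    return x

print([van_der_corput(n) for n in range(1, 9)])
# [0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875, 0.0625]
```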

Halton sequence

First 256 points of the (2,3) Halton sequence

The Halton sequence is a natural generalization of the van der Corput sequence to higher dimensions. Let $s$ be an arbitrary dimension and $b_1, \ldots, b_s$ be arbitrary coprime integers greater than 1. Define

$$x(n) = \left( x_{b_1}(n), \ldots, x_{b_s}(n) \right),$$

where $x_b(n)$ denotes the van der Corput sequence in base $b$. Then there is a constant $C$ depending only on $b_1, \ldots, b_s$, such that the sequence $\{x(n)\}_{n \ge 1}$ is an $s$-dimensional sequence with

$$D_N^*(x(1), \ldots, x(N)) \le C \frac{(\log N)^s}{N}.$$
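A minimal Python sketch, building each coordinate from a van der Corput sequence in its own base; the default bases (2, 3) match the two-dimensional sequence shown in the figure above.

```python
def van_der_corput(n, b):
    x, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, b)
        denom *= b
        x += digit / denom
    return x

def halton(n, bases=(2, 3)):
    """n-th point of the Halton sequence: one van der Corput sequence per
    coordinate, in pairwise coprime bases."""
    return tuple(van_der_corput(n, b) for b in bases)

print([halton(n) for n in range(1, 5)])
# approximately [(0.5, 0.333), (0.25, 0.667), (0.75, 0.111), (0.125, 0.444)]
```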

Hammersley set

2D Hammersley set of size 256

Let $b_1, \ldots, b_{s-1}$ be coprime positive integers greater than 1. For given $s$ and $N$, the $s$-dimensional Hammersley set of size $N$ is defined by[7]

$$x(n) = \left( \frac{n}{N},\ x_{b_1}(n),\ \ldots,\ x_{b_{s-1}}(n) \right)$$

for $n = 1, \ldots, N$. Then

$$D_N^* \le C \frac{(\log N)^{s-1}}{N},$$

where $C$ is a constant depending only on $b_1, \ldots, b_{s-1}$.

Note: The formulas show that the Hammersley set is actually the Halton sequence, but we get one more dimension for free by adding a linear sweep. This is only possible if $N$ is known upfront. A linear set is also the set with the lowest possible one-dimensional discrepancy in general. Unfortunately, for higher dimensions, no such "discrepancy record sets" are known. For small $s$, most low-discrepancy point set generators deliver at least near-optimum discrepancies.
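A minimal Python sketch of the construction (it repeats the van_der_corput helper from the sketches above); the choice of base 2 for the second coordinate is an assumption consistent with the two-dimensional figure.

```python
def van_der_corput(n, b):                    # as in the Halton sketch above
    x, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, b)
        denom *= b
        x += digit / denom
    return x

def hammersley(N, bases=(2,)):
    """s-dimensional Hammersley set of size N: a linear sweep n/N as the first
    coordinate, one van der Corput sequence per remaining coordinate."""
    return [(n / N,) + tuple(van_der_corput(n, b) for b in bases)
            for n in range(1, N + 1)]

points = hammersley(256)                     # a 2-D set of 256 points, as in the figure above
```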

Sobol sequence


The Antonov–Saleev variant of the Sobol' sequence generates numbers between zero and one directly as binary fractions of length $w$ from a set of $w$ special binary fractions, $V_i,\ i = 1, 2, \ldots, w$, called direction numbers. The bits of the Gray code of $i$, $G(i)$, are used to select direction numbers. To get the Sobol' sequence value $s_i$, take the exclusive or of the binary value of the Gray code of $i$ with the appropriate direction number. The number of dimensions required affects the choice of $w$.
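The sketch below illustrates the Gray-code update for the first dimension only, where the direction numbers reduce to $v_k = 2^{-k}$. Real multidimensional Sobol' generators instead derive a separate set of direction numbers for every dimension from primitive polynomials (for example the Joe–Kuo tables), which this sketch does not attempt.

```python
W = 32                                        # output precision in bits
V = [1 << (W - k) for k in range(1, W + 1)]   # direction numbers v_k = 2^{-k} (dimension 1);
                                              # higher dimensions need direction numbers
                                              # derived from primitive polynomials

def sobol_dim1(count):
    """Antonov-Saleev update: x_{n+1} = x_n XOR V[c], where c is the position of the
    rightmost zero bit of n (the bit in which consecutive Gray codes differ)."""
    x, out = 0, []
    for n in range(count):
        c = 0
        while (n >> c) & 1:                   # find the rightmost zero bit of n
            c += 1
        x ^= V[c]
        out.append(x / 2.0 ** W)
    return out

print(sobol_dim1(8))  # [0.5, 0.75, 0.25, 0.375, 0.875, 0.625, 0.125, 0.1875]
```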

Poisson disk sampling


Poisson disk sampling is popular in video games to rapidly place objects in a way that appears random-looking but guarantees that every two points are separated by at least the specified minimum distance.[8] This does not guarantee low discrepancy (as e.g. Sobol'), but at least a significantly lower discrepancy than pure random sampling. The goal of these sampling patterns is based on frequency analysis rather than discrepancy; they produce a type of so-called "blue noise" pattern.
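A minimal sketch of the simplest (and slowest) variant, naive dart throwing; the radius, the number of candidate draws, and the seed are arbitrary assumptions, and practical implementations typically use faster methods such as Bridson's algorithm instead.

```python
import random

def dart_throwing(radius, n_candidates=10000, seed=0):
    """Naive Poisson disk sampling in the unit square: keep a uniform candidate only
    if it is at least `radius` away from every point accepted so far (O(n^2))."""
    rng = random.Random(seed)
    points = []
    for _ in range(n_candidates):
        cand = (rng.random(), rng.random())
        if all((cand[0] - p[0]) ** 2 + (cand[1] - p[1]) ** 2 >= radius * radius
               for p in points):
            points.append(cand)
    return points

samples = dart_throwing(radius=0.05)
print(len(samples), "accepted points, every pair at least 0.05 apart")
```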

Graphical examples


The points plotted below are the first 100, 1000, and 10000 elements in a sequence of the Sobol' type. For comparison, 10000 elements of a sequence of pseudorandom points are also shown. The low-discrepancy sequence was generated by TOMS algorithm 659.[9] An implementation of the algorithm in Fortran is available from Netlib.

The first 100 points in a low-discrepancy sequence of the Sobol' type.
The first 1000 points in the same sequence. These 1000 comprise the first 100, with 900 more points.
The first 10000 points in the same sequence. These 10000 comprise the first 1000, with 9000 more points.
For comparison, here are the first 10000 points in a sequence of uniformly distributed pseudorandom numbers. Regions of higher and lower density are evident.



Notes

  1. ^ Beck, József (1989). "A two-dimensional van Aardenne-Ehrenfest theorem in irregularities of distribution". Compositio Mathematica. 72 (3): 269–339. MR 1032337. S2CID 125940424. Zbl 0691.10041.
  2. ^ Bilyk, Dmitriy; Lacey, Michael T.; Vagharshakyan, Armen (2008). "On the Small Ball Inequality in all dimensions". Journal of Functional Analysis. 254 (9): 2470–2502. arXiv:0705.4619. doi:10.1016/j.jfa.2007.09.010. S2CID 14234006.
  3. ^ Kuipers & Niederreiter 2005, p. 123
  4. ^ Knuth, Donald E. "Chapter 3 – Random Numbers". The Art of Computer Programming. Vol. 2.
  5. ^ Skarupke, Malte (16 June 2018). "Fibonacci Hashing: The Optimization that the World Forgot". One property of the Golden Ratio is that you can use it to subdivide any range roughly evenly ... if you don't know ahead of time how many steps you're going to take
  6. ^ Roberts, Martin (2018). "The Unreasonable Effectiveness of Quasirandom Sequences". Archived from the original on 1 March 2025.
  7. ^ Hammersley, J. M.; Handscomb, D. C. (1964). Monte Carlo Methods. doi:10.1007/978-94-009-5819-7. ISBN 978-94-009-5821-0.
  8. ^ Tulleken, Herman (March 2008). "Poisson Disk Sampling". Dev.Mag. No. 21. pp. 21–25.
  9. ^ Bratley, Paul; Fox, Bennett L. (1988). "Algorithm 659". ACM Transactions on Mathematical Software. 14: 88–100. doi:10.1145/42288.214372. S2CID 17325779.

References

  • Dick, Josef; Pillichshammer, Friedrich (2010). Digital Nets and Sequences: Discrepancy Theory and Quasi-Monte Carlo Integration. Cambridge University Press. ISBN 978-0-521-19159-3.
  • Kuipers, L.; Niederreiter, H. (2005), Uniform distribution of sequences, Dover Publications, ISBN 0-486-45019-8
  • Harald Niederreiter (1992). Random Number Generation and Quasi-Monte Carlo Methods. Society for Industrial and Applied Mathematics. ISBN 0-89871-295-5.
  • Drmota, Michael; Tichy, Robert F. (1997). Sequences, Discrepancies and Applications. Lecture Notes in Math. Vol. 1651. Springer. ISBN 3-540-62606-9.
  • Press, William H.; Flannery, Brian P.; Teukolsky, Saul A.; Vetterling, William T. (1992). Numerical Recipes in C (2nd ed.). Cambridge University Press. ISBN 0-521-43108-5. See Section 7.7 for a less technical discussion of low-discrepancy sequences.