Computational complexity

From Wikipedia, the free encyclopedia

In computer science, the computational complexity or simply complexity of an algorithm is the amount of resources required to run it.[1] Particular focus is given to computation time (generally measured by the number of needed elementary operations) and memory storage requirements. The complexity of a problem is the complexity of the best algorithms that allow solving the problem.

The study of the complexity of explicitly given algorithms is called analysis of algorithms, while the study of the complexity of problems is called computational complexity theory. Both areas are highly related, as the complexity of an algorithm is always an upper bound on the complexity of the problem solved by this algorithm. Moreover, for designing efficient algorithms, it is often fundamental to compare the complexity of a specific algorithm to the complexity of the problem to be solved. Also, in most cases, the only thing that is known about the complexity of a problem is that it is lower than the complexity of the most efficient known algorithms. Therefore, there is a large overlap between analysis of algorithms and complexity theory.

As the amount of resources required to run an algorithm generally varies with the size of the input, the complexity is typically expressed as a function n → f(n), where n is the size of the input and f(n) is either the worst-case complexity (the maximum of the amount of resources that are needed over all inputs of size n) or the average-case complexity (the average of the amount of resources over all inputs of size n). Time complexity is generally expressed as the number of required elementary operations on an input of size n, where elementary operations are assumed to take a constant amount of time on a given computer and change only by a constant factor when run on a different computer. Space complexity is generally expressed as the amount of memory required by an algorithm on an input of size n.

Resources


Time


The resource that is most commonly considered is time. When "complexity" is used without qualification, this generally means time complexity.

The usual units of time (seconds, minutes etc.) are not used in complexity theory because they are too dependent on the choice of a specific computer and on the evolution of technology. For instance, a computer today can execute an algorithm significantly faster than a computer from the 1960s; however, this is not an intrinsic feature of the algorithm but rather a consequence of technological advances in computer hardware. Complexity theory seeks to quantify the intrinsic time requirements of algorithms, that is, the basic time constraints an algorithm would place on any computer. This is achieved by counting the number of elementary operations that are executed during the computation. These operations are assumed to take constant time (that is, not affected by the size of the input) on a given machine, and are often called steps.
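To make the step-counting concrete, here is a minimal sketch (not taken from any reference implementation): it instruments a simple maximum-finding routine so that it reports the number of elementary operations it performs, independently of how fast the underlying machine is. The function name and the choice of which operations to count are illustrative assumptions.

    # Illustrative sketch: count elementary operations ("steps") instead of
    # measuring wall-clock time, so the result does not depend on the machine.
    def find_max(values):
        """Return (maximum, steps) for a non-empty list."""
        steps = 1                     # initial assignment
        best = values[0]
        for v in values[1:]:
            steps += 1                # one comparison per remaining element
            if v > best:
                best = v
                steps += 1            # one assignment when a new maximum appears
        return best, steps

    if __name__ == "__main__":
        for n in (10, 100, 1000):
            _, steps = find_max(list(range(n)))   # increasing input: most assignments
            print(f"n = {n:4d}: {steps} counted steps")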

Bit complexity


Formally, the bit complexity refers to the number of operations on bits that are needed for running an algorithm. With most models of computation, it equals the time complexity up to a constant factor. On computers, the number of operations on machine words that are needed is also proportional to the bit complexity. So, the time complexity and the bit complexity are equivalent for realistic models of computation.

Space


Another important resource is the size of computer memory that is needed for running algorithms.

Communication


For the class of distributed algorithms that are commonly executed by multiple, interacting parties, the resource that is of most interest is the communication complexity. It is the necessary amount of communication between the executing parties.

Others


The number of arithmetic operations is another resource that is commonly used. In this case, one talks of arithmetic complexity. If one knows an upper bound on the size of the binary representation of the numbers that occur during a computation, the time complexity is generally the product of the arithmetic complexity by a constant factor.

For many algorithms the size of the integers that are used during a computation is not bounded, and it is not realistic to consider that arithmetic operations take a constant time. Therefore, the time complexity, generally called bit complexity in this context, may be much larger than the arithmetic complexity. For example, the arithmetic complexity of the computation of the determinant of an n×n integer matrix is O(n^3) for the usual algorithms (Gaussian elimination). The bit complexity of the same algorithms is exponential in n, because the size of the coefficients may grow exponentially during the computation. On the other hand, if these algorithms are coupled with multi-modular arithmetic, the bit complexity may be reduced to Õ(n^4) (soft O notation).
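The following sketch illustrates this gap on a small scale; it is a toy under stated assumptions (random small entries, exact rational arithmetic, partial pivoting), not the article's algorithm. It counts the arithmetic operations of exact Gaussian elimination, which grow roughly as n^3, while also reporting how many bits the largest intermediate numerator needs, which is what drives the bit complexity.

    # Toy sketch (assumptions: random entries in [-9, 9], exact rational
    # arithmetic, partial pivoting). Counts arithmetic operations (~n^3) and
    # tracks the bit length of intermediate numerators, which grows with n.
    from fractions import Fraction
    import random

    def eliminate(m):
        """Row-reduce a square matrix of Fractions; return (arith_ops, max_bits)."""
        n = len(m)
        ops, max_bits = 0, 0
        for k in range(n):
            pivot = next((r for r in range(k, n) if m[r][k] != 0), None)
            if pivot is None:
                continue                          # column already zero below row k
            m[k], m[pivot] = m[pivot], m[k]
            for i in range(k + 1, n):
                factor = m[i][k] / m[k][k]
                ops += 1
                for j in range(k, n):
                    m[i][j] -= factor * m[k][j]   # one multiplication, one subtraction
                    ops += 2
                    max_bits = max(max_bits, abs(m[i][j].numerator).bit_length())
        return ops, max_bits

    if __name__ == "__main__":
        random.seed(0)
        for n in (4, 8, 16, 32):
            m = [[Fraction(random.randint(-9, 9)) for _ in range(n)] for _ in range(n)]
            ops, bits = eliminate(m)
            print(f"n = {n:2d}: {ops:6d} arithmetic operations, largest numerator: {bits} bits")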

In sorting and searching, the resource that is generally considered is the number of entry comparisons. This is generally a good measure of the time complexity if data are suitably organized.

Complexity as a function of input size


It is impossible to count the number of steps of an algorithm on all possible inputs. As the complexity generally increases with the size of the input, the complexity is typically expressed as a function of the size n (in bits) of the input, and therefore, the complexity is a function of n. However, the complexity of an algorithm may vary dramatically for different inputs of the same size. Therefore, several complexity functions are commonly used.

The worst-case complexity is the maximum of the complexity over all inputs of size n, and the average-case complexity is the average of the complexity over all inputs of size n (this makes sense, as the number of possible inputs of a given size is finite). Generally, when "complexity" is used without being further specified, this is the worst-case time complexity that is considered.
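As a toy illustration of these two measures (the algorithm and the input model are my own choices, not the article's), the sketch below counts the comparisons a linear search makes for every possible position of the key in a list of size n, and reports the maximum (worst case) and the mean (average case).

    # Toy sketch: worst-case vs average-case comparison counts of a linear
    # search, taking "all inputs of size n" to be the n possible key positions.
    def linear_search(values, target):
        comparisons = 0
        for i, v in enumerate(values):
            comparisons += 1
            if v == target:
                return i, comparisons
        return -1, comparisons

    if __name__ == "__main__":
        n = 8
        values = list(range(n))
        counts = [linear_search(values, t)[1] for t in values]
        print("worst case  :", max(counts))                # n comparisons
        print("average case:", sum(counts) / len(counts))  # about (n + 1) / 2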

Asymptotic complexity


It is generally difficult to compute precisely the worst-case and the average-case complexity. In addition, these exact values have little practical use, as any change of computer or of model of computation would change the complexity somewhat. Moreover, the resource use is not critical for small values of n, so, for small n, ease of implementation is generally more important than a low complexity.

For these reasons, one generally focuses on the behavior of the complexity for large n, that is, on its asymptotic behavior when n tends to infinity. Therefore, the complexity is generally expressed using big O notation.

For example, the usual algorithm for integer multiplication has a complexity of O(n^2); this means that there is a constant c such that the multiplication of two integers of at most n digits may be done in a time less than c·n^2. This bound is sharp in the sense that the worst-case complexity and the average-case complexity are Ω(n^2), which means that there is a constant c′ such that these complexities are larger than c′·n^2. The radix does not appear in these complexities, as changing the radix changes only the constants c and c′.
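A small sketch of the schoolbook algorithm makes the quadratic bound tangible; the digit-list representation and the decision to count only single-digit multiplications are simplifying assumptions on my part.

    # Sketch: schoolbook multiplication of two n-digit numbers (little-endian
    # digit lists, radix 10), counting single-digit multiplications; the count
    # is exactly n^2, and changing the radix only changes the constant factor.
    def schoolbook_multiply(a_digits, b_digits):
        result = [0] * (len(a_digits) + len(b_digits))
        digit_ops = 0
        for i, a in enumerate(a_digits):
            carry = 0
            for j, b in enumerate(b_digits):
                digit_ops += 1
                total = result[i + j] + a * b + carry
                result[i + j] = total % 10
                carry = total // 10
            result[i + len(b_digits)] += carry
        return result, digit_ops

    if __name__ == "__main__":
        for n in (10, 100, 1000):
            digits = [7] * n                      # an n-digit number 77...7
            _, ops = schoolbook_multiply(digits, digits)
            print(f"n = {n:4d}: {ops} digit multiplications (n^2 = {n * n})")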

Models of computation


The evaluation of the complexity relies on the choice of a model of computation, which consists of defining the basic operations that are done in a unit of time. When the model of computation is not explicitly specified, it is generally implicitly assumed to be a multitape Turing machine, since several more realistic models of computation, such as random-access machines, are asymptotically equivalent for most problems. It is only for very specific and difficult problems, such as integer multiplication in time O(n log n), that the explicit definition of the model of computation is required for proofs.

Deterministic models


A deterministic model of computation is a model of computation such that the successive states of the machine and the operations to be performed are completely determined by the preceding state. Historically, the first deterministic models were recursive functions, lambda calculus, and Turing machines. The model of random-access machines (also called RAM-machines) is also widely used, as a closer counterpart to real computers.

When the model of computation is not specified, it is generally assumed to be a multitape Turing machine. For most algorithms, the time complexity is the same on multitape Turing machines as on RAM-machines, although some care may be needed in how data is stored in memory to get this equivalence.

Non-deterministic computation


In a non-deterministic model of computation, such as non-deterministic Turing machines, some choices may be made at some steps of the computation. In complexity theory, one considers all possible choices simultaneously, and the non-deterministic time complexity is the time needed when the best choices are always made. In other words, one considers that the computation is done simultaneously on as many (identical) processors as needed, and the non-deterministic computation time is the time spent by the first processor that finishes the computation. This parallelism is partly amenable to quantum computing via superposed entangled states when running specific quantum algorithms, such as Shor's factorization, which so far has only been applied to small integers (as of March 2018: 21 = 3 × 7).

Even though such a computation model is not yet realistic, it has theoretical importance, mostly related to the P = NP problem, which questions the identity of the complexity classes formed by taking "polynomial time" and "non-deterministic polynomial time" as least upper bounds. Simulating an NP-algorithm on a deterministic computer usually takes "exponential time". A problem is in the complexity class NP if it may be solved in polynomial time on a non-deterministic machine. A problem is NP-complete if, roughly speaking, it is in NP and is not easier than any other NP problem. Many combinatorial problems, such as the knapsack problem, the travelling salesman problem, and the Boolean satisfiability problem, are NP-complete. For all these problems, the best known algorithms have exponential complexity. If any one of these problems could be solved in polynomial time on a deterministic machine, then all NP problems could also be solved in polynomial time, and one would have P = NP. As of 2017 it is generally conjectured that P ≠ NP, with the practical implication that the worst cases of NP problems are intrinsically difficult to solve, i.e., take longer than any reasonable time span (decades) for inputs of interesting length.

Parallel and distributed computation


Parallel and distributed computing consist of splitting a computation across several processors, which work simultaneously. The difference between the different models lies mainly in the way of transmitting information between processors. Typically, in parallel computing the data transmission between processors is very fast, while, in distributed computing, the data transmission is done through a network and is therefore much slower.

The time needed for a computation on N processors is at least the quotient by N of the time needed by a single processor. In fact this theoretically optimal bound can never be reached, because some subtasks cannot be parallelized, and some processors may have to wait for a result from another processor.

The main complexity problem is thus to design algorithms such that the product of the computation time by the number of processors is as close as possible to the time needed for the same computation on a single processor.
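A back-of-the-envelope sketch of this trade-off follows; the running time, the serial portion, and the processor counts are made-up numbers, chosen only to illustrate the point. When part of the work cannot be split, the time on N processors stays above T/N, and the product time × processors drifts away from the single-processor time T.

    # Sketch with made-up numbers: a computation of single-processor time T,
    # of which a fixed part is inherently serial; the rest splits evenly.
    def parallel_time(total_time, serial_time, processors):
        return serial_time + (total_time - serial_time) / processors

    if __name__ == "__main__":
        T, serial = 1000.0, 50.0        # arbitrary units
        for N in (1, 4, 16, 64):
            t = parallel_time(T, serial, N)
            print(f"N = {N:3d}: time = {t:7.1f}   ideal T/N = {T / N:7.1f}   time * N = {t * N:8.1f}")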

Quantum computing


A quantum computer is a computer whose model of computation is based on quantum mechanics. The Church–Turing thesis applies to quantum computers; that is, every problem that can be solved by a quantum computer can also be solved by a Turing machine. However, some problems may theoretically be solved with a much lower time complexity using a quantum computer rather than a classical computer. This is, for the moment, purely theoretical, as no one knows how to build an efficient quantum computer.

Quantum complexity theory has been developed to study the complexity classes of problems solved using quantum computers. It is used in post-quantum cryptography, which consists of designing cryptographic protocols that are resistant to attacks by quantum computers.

Problem complexity (lower bounds)


The complexity of a problem is the infimum of the complexities of the algorithms that may solve the problem[citation needed], including unknown algorithms. Thus the complexity of a problem is not greater than the complexity of any algorithm that solves the problem.

It follows that the complexity of an algorithm, expressed with big O notation, is also an upper bound on the complexity of the corresponding problem.

On the other hand, it is generally hard to obtain nontrivial lower bounds for problem complexity, and there are few methods for obtaining such lower bounds.

For solving most problems, it is required to read all input data, which, normally, needs a time proportional to the size of the data. Thus, such problems have a complexity that is at least linear, that is, using big omega notation, a complexity Ω(n).

The solution of some problems, typically in computer algebra and computational algebraic geometry, may be very large. In such a case, the complexity is lower bounded by the maximal size of the output, since the output must be written. For example, a system of n polynomial equations of degree d in n indeterminates may have up to d^n complex solutions, if the number of solutions is finite (this is Bézout's theorem). As these solutions must be written down, the complexity of this problem is Ω(d^n). For this problem, an algorithm of complexity d^O(n) is known, which may thus be considered as asymptotically quasi-optimal.

A nonlinear lower bound of Ω(n log n) is known for the number of comparisons needed for a sorting algorithm. Thus the best sorting algorithms are optimal, as their complexity is O(n log n). This lower bound results from the fact that there are n! ways of ordering n objects. As each comparison splits this set of n! orders into two parts, the number N of comparisons that are needed for distinguishing all orders must satisfy 2^N ≥ n!, which implies N = Ω(n log n) by Stirling's formula.
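The bound can be checked numerically; the short sketch below (my own check, not part of the article) compares log2(n!), computed via the log-gamma function to avoid overflow, with n log2 n for a few values of n.

    # Numeric check of the lower bound: any comparison sort needs at least
    # log2(n!) comparisons, which Stirling's formula puts on the order of n log2 n.
    import math

    if __name__ == "__main__":
        for n in (10, 100, 1000, 10000):
            log2_factorial = math.lgamma(n + 1) / math.log(2)   # log2(n!)
            print(f"n = {n:6d}: log2(n!) ~ {log2_factorial:12.0f}   n*log2(n) ~ {n * math.log2(n):12.0f}")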

A standard method for getting lower bounds of complexity consists of reducing a problem to another problem. More precisely, suppose that one may encode a problem A of size n into a subproblem of size f(n) of a problem B, and that the complexity of A is Ω(g(n)). Without loss of generality, one may suppose that the function f increases with n and has an inverse function h. Then the complexity of the problem B is Ω(g(h(n))). This is the method that is used to prove that, if P ≠ NP (an unsolved conjecture), the complexity of every NP-complete problem is Ω(n^k) for every positive integer k.

Use in algorithm design


Evaluating the complexity of an algorithm is an important part of algorithm design, as this gives useful information on the performance that may be expected.

It is a common misconception that the evaluation of the complexity of algorithms will become less important as a result of Moore's law, which posits the exponential growth of the power of modern computers. This is wrong because the increase in power allows working with large input data (big data). For example, when one wants to sort alphabetically a list of a few hundred entries, such as the bibliography of a book, any algorithm should work well in less than a second. On the other hand, for a list of a million entries (the phone numbers of a large town, for example), the elementary algorithms that require O(n^2) comparisons would have to do a trillion comparisons, which would need around a day at a speed of 10 million comparisons per second. In contrast, quicksort and merge sort require only about n log2 n comparisons (as average-case complexity for the former, as worst-case complexity for the latter). For n = 1,000,000, this gives approximately 30,000,000 comparisons, which would only take 3 seconds at 10 million comparisons per second.
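The arithmetic behind these figures is easy to reproduce; the sketch below simply evaluates the two comparison counts for n = 1,000,000 and converts them to running times at the assumed rate of 10 million comparisons per second. Real sorting algorithms perform a small constant multiple of n log2 n comparisons, which is consistent with the paragraph's figure of roughly 30 million.

    # Quick arithmetic check of the paragraph above, assuming a machine that
    # performs 10 million comparisons per second.
    import math

    n = 1_000_000
    rate = 10_000_000                          # comparisons per second

    quadratic = n * n                          # elementary O(n^2) algorithms
    quasi_linear = n * math.log2(n)            # order of merge sort / quicksort

    print(f"n^2 comparisons       : {quadratic:.3e}  ->  {quadratic / rate / 3600:5.1f} hours")
    print(f"n*log2(n) comparisons : {quasi_linear:.3e}  ->  {quasi_linear / rate:5.1f} seconds")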

Thus the evaluation of the complexity may allow eliminating many inefficient algorithms before any implementation. It may also be used for tuning complex algorithms without testing all variants. By determining the most costly steps of a complex algorithm, the study of complexity also makes it possible to focus the effort of improving an implementation on these steps.


References

  1. ^ Vadhan, Salil (2011), "Computational Complexity" (PDF), in van Tilborg, Henk C. A.; Jajodia, Sushil (eds.), Encyclopedia of Cryptography and Security, Springer, pp. 235–240, doi:10.1007/978-1-4419-5906-5_442, ISBN 9781441959065