
From Wikipedia, the free encyclopedia

The Jenkins–Traub algorithm for polynomial zeros is a fast globally convergent iterative polynomial root-finding method published in 1970 by Michael A. Jenkins and Joseph F. Traub. They gave two variants, one for general polynomials with complex coefficients, commonly known as the "CPOLY" algorithm, and a more complicated variant for the special case of polynomials with real coefficients, commonly known as the "RPOLY" algorithm. The latter is "practically a standard in black-box polynomial root-finders".[1]

This article describes the complex variant. Given a polynomial P with complex coefficients, it computes approximations to the n zeros of P(z), one at a time and in roughly increasing order of magnitude. After each root is computed, its linear factor is removed from the polynomial. Using this deflation guarantees that each root is computed only once and that all roots are found.

The real variant follows the same pattern, but computes two roots at a time, either two real roots or a pair of conjugate complex roots. By avoiding complex arithmetic, the real variant can be faster (by a factor of 4) than the complex variant. The Jenkins–Traub algorithm has stimulated considerable research on theory and software for methods of this type.

Overview


The Jenkins–Traub algorithm calculates all of the roots of a polynomial with complex coefficients. The algorithm starts by checking the polynomial for the occurrence of very large or very small roots. If necessary, the coefficients are rescaled by a rescaling of the variable. In the algorithm proper, roots are found one by one and generally in increasing size. After each root is found, the polynomial is deflated by dividing off the corresponding linear factor. Indeed, the factorization of the polynomial into the linear factor and the remaining deflated polynomial is already a result of the root-finding procedure. The root-finding procedure has three stages that correspond to different variants of the inverse power iteration. See Jenkins and Traub.[2] A description can also be found in Ralston and Rabinowitz,[3] p. 383. The algorithm is similar in spirit to the two-stage algorithm studied by Traub.[4]

Root-finding procedure


Starting with the current polynomial P(X) of degree n, the aim is to compute the smallest root α₁ of P(X). The polynomial can then be split into a linear factor and the remaining polynomial factor,

P(X) = (X − α₁) · P₁(X),

where P₁(X) has degree n − 1. Other root-finding methods strive primarily to improve the root, and thus the first factor. The main idea of the Jenkins–Traub method is to incrementally improve the second factor.

To that end, a sequence of so-called H polynomials is constructed. These polynomials are all of degree n − 1 and are supposed to converge to the factor P₁(X) of P(X) containing (the linear factors of) all the remaining roots. The sequence of H polynomials occurs in two variants, an unnormalized variant that allows easy theoretical insights and a normalized variant \bar H of polynomials that keeps the coefficients in a numerically sensible range. The construction of the H polynomials is guided by a sequence of complex numbers (s_λ), λ = 0, 1, 2, …, called shifts. These shifts themselves depend, at least in the third stage, on the previous H polynomials. The H polynomials are defined as the solution to the implicit recursion

H^{(0)}(X) = P′(X)

and

(X − s_λ) · H^{(λ+1)}(X) = H^{(λ)}(X) − (H^{(λ)}(s_λ)/P(s_λ)) · P(X).

A direct solution to this implicit equation is

H^{(λ+1)}(X) = (H^{(λ)}(X) − (H^{(λ)}(s_λ)/P(s_λ)) · P(X)) / (X − s_λ),

where the polynomial division is exact.
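As a sanity check of this recursion, one unnormalized step can be sketched in a few lines of Python. This is an illustrative fragment, not the published CPOLY code; the coefficient-list representation (highest degree first) is an assumption of the sketch.

```python
def poly_eval(coeffs, x):
    """Horner evaluation of a polynomial given by coeffs, highest degree first."""
    result = 0j
    for c in coeffs:
        result = result * x + c
    return result

def h_step(P, H, s):
    """One unnormalized step of the implicit recursion:
    H_next(X) = (H(X) - (H(s)/P(s)) * P(X)) / (X - s).
    The division is exact because the numerator vanishes at X = s.
    Assumes s is not a root of P, so P(s) != 0."""
    ratio = poly_eval(H, s) / poly_eval(P, s)
    # numerator H - ratio * P, aligned on degree (P has one more coefficient)
    num = [-ratio * P[0]] + [h - ratio * p for h, p in zip(H, P[1:])]
    # exact synthetic division of the numerator by (X - s)
    quotient, carry = [], 0j
    for c in num:
        carry = carry * s + c
        quotient.append(carry)
    return quotient[:-1]  # the dropped last entry is the (zero) remainder
```

For P(X) = X² − 3X + 2 and H^{(0)} = P′, repeated steps with shift s = 0 drive the normalized ratio of the H coefficients towards the cofactor X − 2 of the smallest root 1.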

Algorithmically, one would use long division by the linear factor, as in the Horner scheme or Ruffini rule, to evaluate the polynomials at s_λ and obtain the quotients at the same time:

P(X) = p(X) · (X − s_λ) + P(s_λ)   and   H^{(λ)}(X) = h(X) · (X − s_λ) + H^{(λ)}(s_λ).

With the resulting quotients p(X) and h(X) as intermediate results, the next H polynomial is obtained as

H^{(λ+1)}(X) = h(X) − (H^{(λ)}(s_λ)/P(s_λ)) · p(X).

Since the highest-degree coefficient is obtained from P(X), the leading coefficient of H^{(λ+1)}(X) is −H^{(λ)}(s_λ)/P(s_λ) times the leading coefficient of P. If this is divided out, the normalized H polynomial is

\bar H^{(λ+1)}(X) = p(X) − (P(s_λ)/H^{(λ)}(s_λ)) · h(X).
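The Horner/Ruffini route can be sketched as follows (again an illustrative fragment under assumed conventions: coefficients highest degree first, P monic). One synthetic division yields both the value and the quotient, and the normalized H polynomial follows directly.

```python
def horner_divide(coeffs, s):
    """One synthetic division: coeffs = quotient * (X - s) + value.
    Returns both the quotient and the value at s."""
    quotient, value = [], coeffs[0]
    for c in coeffs[1:]:
        quotient.append(value)
        value = value * s + c
    return quotient, value

def next_h_normalized(P, H, s):
    """Normalized step: Hbar_next(X) = p(X) - (P(s)/H(s)) * h(X),
    where p and h are the quotients of P and H by (X - s)."""
    p, Ps = horner_divide(P, s)
    h, Hs = horner_divide(H, s)
    h = [0j] * (len(p) - len(h)) + h   # align degrees (h has one less)
    c = Ps / Hs
    return [pi - c * hi for pi, hi in zip(p, h)]
```

For monic P the result is again monic, which is what keeps the coefficients in a numerically sensible range.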

Stage one: no-shift process


For λ = 0, 1, …, M − 1, set s_λ = 0, so that

H^{(λ+1)}(X) = (H^{(λ)}(X) − (H^{(λ)}(0)/P(0)) · P(X)) / X.

Usually M = 5 is chosen for polynomials of moderate degrees up to n = 50. This stage is not necessary from theoretical considerations alone, but is useful in practice. It emphasizes in the H polynomials the cofactor(s) (of the linear factor) of the smallest root(s).
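The whole first stage can be sketched under the same assumed conventions (monic P, coefficients highest degree first); with s_λ = 0 the exact division by X reduces to dropping the vanishing constant term:

```python
def stage_one(P, M=5):
    """No-shift process: start from H^(0) = P' and apply M steps with s = 0.
    P is monic, coefficients highest degree first.
    Assumes P(0) != 0 (a zero root would be divided out beforehand)."""
    n = len(P) - 1
    H = [(n - i) * P[i] for i in range(n)]   # H^(0) = P'
    for _ in range(M):
        ratio = H[-1] / P[-1]                # H(0) / P(0)
        # (H(X) - ratio * P(X)) has zero constant term; dividing by X
        # shifts the remaining coefficients down one degree slot.
        H = [-ratio * P[0]] + [h - ratio * p for h, p in zip(H[:-1], P[1:-1])]
    return H
```

With P(X) = X² − 3X + 2 this emphasizes the cofactor X − 2 of the smallest root 1; the normalized iterates approach it geometrically at rate |α₁/α₂| = 1/2.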

Stage two: fixed-shift process


The shift s for this stage is determined as some point close to the smallest root of the polynomial. It is quasi-randomly located on the circle with the inner root radius β, which in turn is estimated as the positive solution R of the equation

R^n + |a_{n−1}/a_n| · R^{n−1} + ⋯ + |a_1/a_n| · R = |a_0/a_n|,

where a_0, …, a_n are the coefficients of P. Since the left side is a convex function that increases monotonically from zero to infinity, this equation is easy to solve, for instance by Newton's method.
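The radius estimate can be sketched as follows (illustrative; assumes coefficients highest degree first and a nonzero constant term). The function f(R) = |a_n|R^n + ⋯ + |a_1|R − |a_0| is increasing and convex for R > 0, so Newton's method started from any point with f(R) > 0 converges monotonically:

```python
def inner_root_radius(P):
    """Positive solution R of |a_n| R^n + ... + |a_1| R = |a_0|,
    an estimate (lower bound) for the smallest root modulus."""
    a = [abs(c) for c in P]

    def f(R):
        v = 0.0
        for c in a[:-1]:
            v = v * R + c
        return v * R - a[-1]

    def df(R):
        n, v = len(a) - 1, 0.0
        for i, c in enumerate(a[:-1]):
            v = v * R + (n - i) * c
        return v

    R = 1.0
    while f(R) < 0.0:        # move to the side of the root where f > 0
        R *= 2.0
    for _ in range(200):     # Newton's method, monotone from this side
        step = f(R) / df(R)
        R -= step
        if abs(step) <= 1e-15 * R:
            break
    return R
```

For P(X) = X² − 3X + 2 (roots 1 and 2) this gives the positive root of R² + 3R − 2, about 0.5616, indeed below the smallest root modulus 1.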

Now choose s = β · e^{iφ} on the circle of this radius, with a random angle φ. The sequence of polynomials H^{(λ+1)}(X), λ = M, M + 1, …, L − 1, is generated with the fixed shift value s_λ = s. This creates an asymmetry relative to the previous stage which increases the chance that the H polynomial moves towards the cofactor of a single root. During this iteration, the current approximation for the root,

t_λ = s − P(s)/\bar H^{(λ+1)}(s),

is traced. The second stage is terminated as successful if the conditions

|t_{λ+1} − t_λ| < ½ · |t_λ|   and   |t_{λ+2} − t_{λ+1}| < ½ · |t_{λ+1}|

are simultaneously met. This limits the relative step size of the iteration, ensuring that the approximation sequence stays in the range of the smaller roots. If there was no success after some number of iterations, a different random point on the circle is tried. Typically one uses 9 iterations for polynomials of moderate degree, with a doubling strategy for the case of multiple failures.
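A sketch of the fixed-shift stage with its termination test, under the same assumed conventions (coefficients highest degree first, P monic); the caller supplies the quasi-random point s = β · e^{iφ} and retries with a new angle on failure:

```python
def horner_divide(coeffs, s):
    """coeffs = quotient * (X - s) + value; returns (quotient, value)."""
    quotient, value = [], coeffs[0]
    for c in coeffs[1:]:
        quotient.append(value)
        value = value * s + c
    return quotient, value

def next_h_normalized(P, H, s):
    """Hbar_next(X) = p(X) - (P(s)/H(s)) * h(X) with Horner quotients p, h."""
    p, Ps = horner_divide(P, s)
    h, Hs = horner_divide(H, s)
    h = [0j] * (len(p) - len(h)) + h
    return [pi - (Ps / Hs) * hi for pi, hi in zip(p, h)]

def stage_two(P, H, s, max_iter=9):
    """Fixed-shift process: iterate with constant shift s, trace the root
    approximations t, and stop once two consecutive relative steps are
    below 1/2.  Returns (H, t, success)."""
    t_prev = t_curr = None
    for _ in range(max_iter):
        H = next_h_normalized(P, H, s)
        _, Ps = horner_divide(P, s)
        _, Hs = horner_divide(H, s)
        t_next = s - Ps / Hs
        if t_prev is not None:
            if (abs(t_curr - t_prev) < 0.5 * abs(t_prev)
                    and abs(t_next - t_curr) < 0.5 * abs(t_curr)):
                return H, t_next, True   # pass t on as s_L to stage three
        t_prev, t_curr = t_curr, t_next
    return H, t_curr, False              # caller retries with another s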

Stage three: variable-shift process


The polynomials H^{(λ+1)}(X) are now generated using the variable shifts s_λ, λ = L, L + 1, …, which are given by

s_L = t_L, the last root estimate of the second stage,

and

s_{λ+1} = s_λ − P(s_λ)/\bar H^{(λ+1)}(s_λ),

where \bar H^{(λ+1)}(X) is the normalized H polynomial, that is, H^{(λ+1)}(X) divided by its leading coefficient.
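The third stage then reads, in the same sketch conventions (illustrative only, not the published CPOLY/RPOLY code):

```python
def horner_divide(coeffs, s):
    """coeffs = quotient * (X - s) + value; returns (quotient, value)."""
    quotient, value = [], coeffs[0]
    for c in coeffs[1:]:
        quotient.append(value)
        value = value * s + c
    return quotient, value

def next_h_normalized(P, H, s):
    """Hbar_next(X) = p(X) - (P(s)/H(s)) * h(X) with Horner quotients p, h."""
    p, Ps = horner_divide(P, s)
    h, Hs = horner_divide(H, s)
    h = [0j] * (len(p) - len(h)) + h
    return [pi - (Ps / Hs) * hi for pi, hi in zip(p, h)]

def stage_three(P, H, s, tol=1e-13, max_iter=50):
    """Variable-shift process: s <- s - P(s)/Hbar(s), updating the H
    polynomial at every step.  Started from the stage-two estimate."""
    for _ in range(max_iter):
        H = next_h_normalized(P, H, s)
        _, Ps = horner_divide(P, s)
        _, Hs = horner_divide(H, s)
        step = Ps / Hs
        s -= step
        if abs(step) <= tol * max(abs(s), 1.0):
            return s, H
    raise ArithmeticError("no convergence; restart stage two elsewhere")
```

On return, s approximates the smallest root and the monic H polynomial approximates the deflated factor P₁(X).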

If the step size in stage three does not fall fast enough to zero, then stage two is restarted using a different random point. If this does not succeed after a small number of restarts, the number of steps in stage two is doubled.

Convergence


It can be shown that, provided L is chosen sufficiently large, s_λ always converges to a root of P.

The algorithm converges for any distribution of roots, but may fail to find all roots of the polynomial. Furthermore, the convergence is slightly faster than the quadratic convergence of the Newton–Raphson method; however, it uses one and a half times as many operations per step: two polynomial evaluations for Newton–Raphson versus three polynomial evaluations in the third stage.

What gives the algorithm its power?


Compare with the Newton–Raphson iteration

z_{k+1} = z_k − P(z_k)/P′(z_k).

This iteration uses the given P and its derivative P′. In contrast, the third stage of Jenkins–Traub,

s_{λ+1} = s_λ − P(s_λ)/\bar H^{(λ+1)}(s_λ),

is precisely a Newton–Raphson iteration performed on certain rational functions. More precisely, Newton–Raphson is being performed on a sequence of rational functions

W^{(λ)}(z) = P(z)/H^{(λ)}(z).

For λ sufficiently large, W^{(λ)}(z) is as close as desired to a first-degree polynomial z − α₁, where α₁ is one of the zeros of P. Even though Stage 3 is precisely a Newton–Raphson iteration, differentiation is not performed.

Analysis of the H polynomials


Let α₁, …, α_n be the roots of P(X). The so-called Lagrange factors of P(X) are the cofactors of these roots,

P_m(X) = P(X)/(X − α_m) = ∏_{k ≠ m} (X − α_k).

If all roots are different, then the Lagrange factors form a basis of the space of polynomials of degree at most n − 1. By analysis of the recursion procedure one finds that the H polynomials have the coordinate representation

H^{(λ)}(X) = Σ_{m=1}^{n} [ ∏_{κ=0}^{λ−1} (α_m − s_κ) ]^{−1} · P_m(X).

Each Lagrange factor has leading coefficient 1, so that the leading coefficient of the H polynomial is the sum of these coefficients. The normalized H polynomials are thus

\bar H^{(λ)}(X) = ( Σ_{m=1}^{n} [ ∏_{κ=0}^{λ−1} (α_m − s_κ) ]^{−1} · P_m(X) ) / ( Σ_{m=1}^{n} [ ∏_{κ=0}^{λ−1} (α_m − s_κ) ]^{−1} ).
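This coordinate representation is easy to verify numerically. The sketch below (illustrative, with assumed conventions: coefficients highest degree first, monic P) builds a small polynomial from its roots, runs the unnormalized recursion for a few shifts, and compares against Σ_m [∏_κ (α_m − s_κ)]^{−1} P_m at a test point:

```python
def poly_mul_linear(P, r):
    """Multiply P (highest degree first) by the linear factor (X - r)."""
    out = list(P) + [0j]
    for i in range(len(P)):
        out[i + 1] -= r * P[i]
    return out

def poly_eval(P, x):
    result = 0j
    for c in P:
        result = result * x + c
    return result

def h_step(P, H, s):
    """Unnormalized H_next = (H - (H(s)/P(s)) * P) / (X - s), exact division."""
    ratio = poly_eval(H, s) / poly_eval(P, s)
    num = [-ratio * P[0]] + [h - ratio * p for h, p in zip(H, P[1:])]
    quotient, carry = [], 0j
    for c in num:
        carry = carry * s + c
        quotient.append(carry)
    return quotient[:-1]

roots = [0.5 + 0j, 2.0 + 1.0j, -3.0 + 0j]
P = [1 + 0j]
for r in roots:
    P = poly_mul_linear(P, r)
# Lagrange factors P_m = P / (X - alpha_m)
factors = [[1 + 0j] for _ in roots]
for m in range(len(roots)):
    for k, rk in enumerate(roots):
        if k != m:
            factors[m] = poly_mul_linear(factors[m], rk)
# H^(0) = P' = sum of the Lagrange factors
n = len(P) - 1
H = [(n - i) * P[i] for i in range(n)]
shifts = [0.3 + 0.1j, -0.2j, 0.7 + 0j]
for s in shifts:
    H = h_step(P, H, s)
# compare with the coordinate representation at an arbitrary test point
x = 1.7 - 0.4j
rhs = 0j
for r, F in zip(roots, factors):
    w = 1 + 0j
    for s in shifts:
        w *= (r - s)
    rhs += poly_eval(F, x) / w
assert abs(poly_eval(H, x) - rhs) < 1e-9
```

Each shift multiplies the weight of P_m by (α_m − s)^{−1}, which is exactly why the weight of the cofactor of the root closest to the shifts comes to dominate.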

Convergence orders


If the condition |α₁ − s_κ| < |α_m − s_κ| for m = 2, …, n holds for almost all iterates s_κ, the normalized H polynomials will converge at least geometrically towards P₁(X).

Under the condition that |α₁| < |α₂| ≤ |α₃| ≤ ⋯ ≤ |α_n|, one gets the asymptotic estimates

  • for stage 1: \bar H^{(λ)}(X) = P₁(X) + O(|α₁/α₂|^λ);
  • for stage 2, if s is close enough to α₁: \bar H^{(λ)}(X) = P₁(X) + O(|α₁/α₂|^M · |(s − α₁)/(s − α₂)|^{λ−M}) and t_λ = α₁ + O(|α₁/α₂|^M · |(s − α₁)/(s − α₂)|^{λ−M});
  • and for stage 3: \bar H^{(λ)}(X) = P₁(X) + O(∏_{κ=M}^{λ−1} |(s_κ − α₁)/(s_κ − α₂)|) and s_λ = α₁ + O(∏_{κ=M}^{λ−1} |(s_κ − α₁)/(s_κ − α₂)|), giving rise to a higher than quadratic convergence order of φ² = 1 + φ ≈ 2.61, where φ = (1 + √5)/2 is the golden ratio.

Interpretation as inverse power iteration


All stages of the Jenkins–Traub complex algorithm may be represented as the linear algebra problem of determining the eigenvalues of a special matrix. This matrix is the coordinate representation of a linear map in the n-dimensional space of polynomials of degree n − 1 or less. The principal idea of this map is to interpret the factorization

P(X) = (X − α₁) · P₁(X),

with a root α₁ and P₁(X) the remaining factor of degree n − 1, as the eigenvector equation for the multiplication with the variable X, followed by remainder computation with divisor P(X),

M_X(H) = (X · H(X)) mod P(X).

This maps polynomials of degree at most n − 1 to polynomials of degree at most n − 1. The eigenvalues of this map are the roots of P(X), since the eigenvector equation reads

M_X(H) = α · H,

which implies that (X − α) · H(X) ≡ 0 mod P(X), that is, that X − α is a linear factor of P(X). In the monomial basis the linear map M_X is represented by a companion matrix of the polynomial P; for monic P(X) = X^n + c_{n−1}X^{n−1} + ⋯ + c_0 the resulting transformation matrix is

A =
  [ 0  0  ⋯  0  −c_0     ]
  [ 1  0  ⋯  0  −c_1     ]
  [ ⋮      ⋱     ⋮       ]
  [ 0  0  ⋯  1  −c_{n−1} ].

To this matrix the inverse power iteration is applied in the three variants of no shift, constant shift and generalized Rayleigh shift in the three stages of the algorithm. It is more efficient to perform the linear algebra operations in polynomial arithmetic rather than by matrix operations; however, the properties of the inverse power iteration remain the same.
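The linear map itself is a one-liner in polynomial arithmetic. The sketch below (illustrative conventions: coefficients in ascending order, constant term first, P monic) implements H ↦ (X · H) mod P and checks the eigenvector property for a cofactor P(X)/(X − α):

```python
def times_x_mod_p(h, p):
    """The map M_X: H(X) -> (X * H(X)) mod P(X).
    h has n coefficients (degree < n), p has n + 1 and is monic;
    both are in ascending order (constant term first)."""
    shifted = [0j] + list(h)   # multiply by X
    lead = shifted.pop()       # coefficient of X^n
    # X^n = -(p[n-1] X^{n-1} + ... + p[0]) mod P, since P is monic
    return [c - lead * p[i] for i, c in enumerate(shifted)]

# P(X) = (X - 1)(X - 2)(X - 3) = X^3 - 6X^2 + 11X - 6, ascending order
p = [-6 + 0j, 11 + 0j, -6 + 0j, 1 + 0j]
# cofactor P_1(X) = P(X)/(X - 1) = X^2 - 5X + 6
h = [6 + 0j, -5 + 0j, 1 + 0j]
# eigenvector equation: M_X(P_1) = 1 * P_1, the eigenvalue being the root 1
image = times_x_mod_p(h, p)
assert all(abs(a - b) < 1e-12 for a, b in zip(image, h))
```

Applying shifted inverse power iteration to this map, carried out in polynomial arithmetic, is exactly the H-polynomial recursion described above.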

Real coefficients


The Jenkins–Traub algorithm described earlier works for polynomials with complex coefficients. The same authors also created a three-stage algorithm for polynomials with real coefficients. See Jenkins and Traub A Three-Stage Algorithm for Real Polynomials Using Quadratic Iteration.[5] The algorithm finds either a linear or quadratic factor working completely in real arithmetic. If the complex and real algorithms are applied to the same real polynomial, the real algorithm is about four times as fast. The real algorithm always converges and the rate of convergence is greater than second order.

A connection with the shifted QR algorithm


There is a surprising connection with the shifted QR algorithm for computing matrix eigenvalues. See Dekker and Traub, The shifted QR algorithm for Hermitian matrices.[6] Again, the shifts may be viewed as Newton–Raphson iteration on a sequence of rational functions converging to a first-degree polynomial.

Software and testing


The software for the Jenkins–Traub algorithm was published as Jenkins and Traub Algorithm 419: Zeros of a Complex Polynomial.[7] The software for the real algorithm was published as Jenkins Algorithm 493: Zeros of a Real Polynomial.[8]

The methods have been extensively tested. As predicted, they enjoy faster than quadratic convergence for all distributions of zeros.

However, there are polynomials which can cause loss of precision,[9] as illustrated by the following example. The polynomial has all its zeros lying on two half-circles of different radii. Wilkinson recommends that, for stable deflation, the smaller zeros be computed first. The second-stage shifts are chosen so that the zeros on the smaller half-circle are found first. After deflation, the polynomial with the zeros on the remaining half-circle is known to be ill-conditioned if the degree is large; see Wilkinson,[10] p. 64. The original polynomial was of degree 60 and suffered severe deflation instability.

References

  1. ^ Press, W. H., Teukolsky, S. A., Vetterling, W. T. and Flannery, B. P. (2007), Numerical Recipes: The Art of Scientific Computing, 3rd ed., Cambridge University Press, page 470.
  2. ^ Jenkins, M. A. and Traub, J. F. (1970), A Three-Stage Variables-Shift Iteration for Polynomial Zeros and Its Relation to Generalized Rayleigh Iteration, Numer. Math. 14, 252–263.
  3. ^ Ralston, A. and Rabinowitz, P. (1978), A First Course in Numerical Analysis, 2nd ed., McGraw-Hill, New York.
  4. ^ Traub, J. F. (1966), A Class of Globally Convergent Iteration Functions for the Solution of Polynomial Equations, Math. Comp., 20(93), 113–138.
  5. ^ Jenkins, M. A. and Traub, J. F. (1970), A Three-Stage Algorithm for Real Polynomials Using Quadratic Iteration, SIAM J. Numer. Anal., 7(4), 545–566.
  6. ^ Dekker, T. J. and Traub, J. F. (1971), The shifted QR algorithm for Hermitian matrices, Lin. Algebra Appl., 4(2), 137–154.
  7. ^ Jenkins, M. A. and Traub, J. F. (1972), Algorithm 419: Zeros of a Complex Polynomial, Comm. ACM, 15, 97–99.
  8. ^ Jenkins, M. A. (1975), Algorithm 493: Zeros of a Real Polynomial, ACM TOMS, 1, 178–189.
  9. ^ "William Kahan Oral history interview by Thomas Haigh". The History of Numerical Analysis and Scientific Computing. Philadelphia, PA. 8 August 2005. Retrieved 2025-08-06.
  10. ^ Wilkinson, J. H. (1963), Rounding Errors in Algebraic Processes, Prentice Hall, Englewood Cliffs, N.J.