Conjugate gradient method

From Wikipedia, the free encyclopedia
A comparison of the convergence of gradient descent with optimal step size (in green) and the conjugate gradient method (in red) for minimizing a quadratic function associated with a given linear system. The conjugate gradient method, assuming exact arithmetic, converges in at most n steps, where n is the size of the matrix of the system (here n = 2).

In mathematics, the conjugate gradient method is an algorithm for the numerical solution of particular systems of linear equations, namely those whose matrix is positive-semidefinite. The conjugate gradient method is often implemented as an iterative algorithm, applicable to sparse systems that are too large to be handled by a direct implementation or other direct methods such as the Cholesky decomposition. Large sparse systems often arise when numerically solving partial differential equations or optimization problems.

The conjugate gradient method can also be used to solve unconstrained optimization problems such as energy minimization. It is commonly attributed to Magnus Hestenes and Eduard Stiefel,[1][2] who programmed it on the Z4,[3] and extensively researched it.[4][5]

The biconjugate gradient method provides a generalization to non-symmetric matrices. Various nonlinear conjugate gradient methods seek minima of nonlinear optimization problems.

Description of the problem addressed by conjugate gradients

Suppose we want to solve the system of linear equations

    Ax = b

for the vector x, where the known n × n matrix A is symmetric (i.e., A^T = A), positive-definite (i.e., x^T A x > 0 for all non-zero vectors x in R^n), and real, and b is known as well. We denote the unique solution of this system by x*.

Derivation as a direct method

The conjugate gradient method can be derived from several different perspectives, including specialization of the conjugate direction method for optimization, and variation of the Arnoldi/Lanczos iteration for eigenvalue problems. Despite differences in their approaches, these derivations share a common topic—proving the orthogonality of the residuals and conjugacy of the search directions. These two properties are crucial to developing the well-known succinct formulation of the method.

We say that two non-zero vectors u and v are conjugate (with respect to A) if

    u^T A v = 0.

Since A is symmetric and positive-definite, the left-hand side defines an inner product

    <u, v>_A := u^T A v.

Two vectors are conjugate if and only if they are orthogonal with respect to this inner product. Being conjugate is a symmetric relation: if u is conjugate to v, then v is conjugate to u. Suppose that

    P = {p_1, ..., p_n}

is a set of n mutually conjugate vectors with respect to A, i.e. p_i^T A p_j = 0 for all i ≠ j. Then P forms a basis for R^n, and we may express the solution x* of Ax = b in this basis:

    x* = Σ_{i=1}^n α_i p_i.

Left-multiplying the problem Ax* = b with the vector p_k^T yields

    p_k^T b = p_k^T A x* = Σ_{i=1}^n α_i p_k^T A p_i = α_k (p_k^T A p_k),

and so

    α_k = (p_k^T b) / (p_k^T A p_k).

This gives the following method[4] for solving the equation Ax = b: find a sequence of n conjugate directions, and then compute the coefficients α_k.
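As a concrete illustration of this direct method, the following MATLAB sketch (a minimal example; the function name is hypothetical, and the mutually A-conjugate directions are assumed to be supplied as the columns of a matrix P) accumulates the solution from the coefficients α_k = (p_k^T b) / (p_k^T A p_k):

% Direct conjugate-direction solve: x = sum_k alpha_k * p_k,
% assuming the columns of P are mutually A-conjugate directions.
function x = conjugate_direction_solve(A, b, P)
    n = size(A, 1);
    x = zeros(n, 1);
    for k = 1:n
        p = P(:, k);
        alpha = (p' * b) / (p' * A * p);   % alpha_k = p_k' b / (p_k' A p_k)
        x = x + alpha * p;
    end
end

For a small test problem, a suitable P can be obtained, for example, by A-orthogonalizing the standard basis with a Gram–Schmidt process.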

As an iterative method

If we choose the conjugate vectors p_k carefully, then we may not need all of them to obtain a good approximation to the solution x*. So, we want to regard the conjugate gradient method as an iterative method. This also allows us to approximately solve systems where n is so large that the direct method would take too much time.

We denote the initial guess for x* by x_0 (we can assume without loss of generality that x_0 = 0, otherwise consider the system Az = b − Ax_0 instead). Starting with x_0 we search for the solution, and in each iteration we need a metric to tell us whether we are closer to the solution x* (that is unknown to us). This metric comes from the fact that the solution x* is also the unique minimizer of the following quadratic function

    f(x) = (1/2) x^T A x − x^T b,   x in R^n.

The existence of a unique minimizer is apparent since its Hessian matrix of second derivatives is symmetric positive-definite,

    H(f(x)) = A,

and the fact that the minimizer solves the initial problem follows from its first derivative,

    ∇f(x) = Ax − b.

This suggests taking the first basis vector p_0 to be the negative of the gradient of f at x = x_0. The gradient of f equals Ax − b, so starting with the initial guess x_0, this means we take p_0 = b − Ax_0. The other vectors in the basis will be conjugate to the gradient, hence the name conjugate gradient method. Note that p_0 is also the residual provided by this initial step of the algorithm.

Let r_k be the residual at the kth step:

    r_k = b − Ax_k.

As observed above, r_k is the negative gradient of f at x_k, so the gradient descent method would move in the direction r_k. Here, however, we insist that the directions p_k be conjugate to each other. A practical way to enforce this is to require that the next search direction be built out of the current residual and all previous search directions. The conjugation constraint is an orthonormal-type constraint and hence the algorithm can be viewed as an example of Gram–Schmidt orthonormalization. This gives the following expression:

    p_k = r_k − Σ_{i<k} ( (p_i^T A r_k) / (p_i^T A p_i) ) p_i

(see the picture at the top of the article for the effect of the conjugacy constraint on convergence). Following this direction, the next optimal location is given by

    x_{k+1} = x_k + α_k p_k,

with

    α_k = (p_k^T b) / (p_k^T A p_k) = (p_k^T (r_k + A x_k)) / (p_k^T A p_k) = (p_k^T r_k) / (p_k^T A p_k),

where the last equality follows from the definition of r_k (together with the fact that p_k is A-conjugate to x_k, which is a combination of the previous directions). The expression for α_k can also be derived by substituting the expression for x_{k+1} into f and minimizing it with respect to α_k:

    α_k = argmin_α f(x_k + α p_k) = (r_k^T p_k) / (p_k^T A p_k).
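The line-search interpretation of α_k can be checked numerically. The following MATLAB sketch uses an assumed 2 × 2 system (A, b, and the current iterate x are illustrative values, not quantities fixed by the text above); it evaluates f(x + α p) on a grid of α values and confirms that the minimum is attained at the closed-form α:

% Verify that alpha = (r'*p)/(p'*A*p) minimizes f along the search direction.
A = [4, 1; 1, 3];          % assumed symmetric positive-definite example
b = [1; 2];
x = [2; 1];                % assumed current iterate
f = @(y) 0.5 * y' * A * y - b' * y;    % the quadratic objective

r = b - A * x;             % residual = negative gradient of f at x
p = r;                     % search direction (first step: steepest descent)
alpha_star = (r' * p) / (p' * A * p);

alphas = linspace(0, 2 * alpha_star, 1001);
vals = arrayfun(@(a) f(x + a * p), alphas);
[~, idx] = min(vals);
fprintf('closed-form alpha = %.6f, grid minimizer = %.6f\n', ...
        alpha_star, alphas(idx));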

The resulting algorithm

The above algorithm gives the most straightforward explanation of the conjugate gradient method. Seemingly, the algorithm as stated requires storage of all previous searching directions and residue vectors, as well as many matrix–vector multiplications, and thus can be computationally expensive. However, a closer analysis of the algorithm shows that r_i is orthogonal to r_j, i.e. r_i^T r_j = 0, for i ≠ j, and that p_i is A-orthogonal to p_j, i.e. p_i^T A p_j = 0, for i ≠ j. This can be interpreted as follows: as the algorithm progresses, the r_i and the p_i span the same Krylov subspace, with the r_i forming an orthogonal basis with respect to the standard inner product and the p_i forming an orthogonal basis with respect to the inner product induced by A. Therefore, x_k can be regarded as the projection of x* onto the Krylov subspace.

That is, if the CG method starts with x_0 = 0, then[6]

    x_k = argmin_{y in K_k} ‖x* − y‖_A,   where K_k = span{b, Ab, ..., A^{k−1}b} and ‖z‖_A^2 = z^T A z.

The algorithm is detailed below for solving Ax = b, where A is a real, symmetric, positive-definite matrix. The input vector x_0 can be an approximate initial solution or 0. It is a different formulation of the exact procedure described above.
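A standard statement of the resulting recurrences, consistent with the derivation above and with the example implementations below, is:

r_0 := b − A x_0
p_0 := r_0
k := 0
repeat
    α_k := (r_k^T r_k) / (p_k^T A p_k)
    x_{k+1} := x_k + α_k p_k
    r_{k+1} := r_k − α_k A p_k
    if r_{k+1} is sufficiently small then exit loop end if
    β_k := (r_{k+1}^T r_{k+1}) / (r_k^T r_k)
    p_{k+1} := r_{k+1} + β_k p_k
    k := k + 1
end repeat
The result is x_{k+1}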

This is the most commonly used algorithm. The same formula for β_k is also used in the Fletcher–Reeves nonlinear conjugate gradient method.

Restarts

We note that x_1 is computed by the gradient descent method applied to x_0. Setting β_k = 0 would similarly make x_{k+1} computed by the gradient descent method from x_k, i.e., it can be used as a simple implementation of a restart of the conjugate gradient iterations.[4] Restarts could slow down convergence, but may improve stability if the conjugate gradient method misbehaves, e.g., due to round-off error.
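One simple way to realize such restarts is to force β_k = 0 every fixed number of iterations. The MATLAB sketch below is a variant of the example code later in the article; the function name and the restart_every parameter are hypothetical choices for illustration:

function x = conjugate_gradient_restarted(A, b, x, tol, restart_every)
% CG with periodic restarts: beta is forced to zero every `restart_every`
% iterations, which turns that step into a plain steepest-descent step.
    r = b - A * x;
    p = r;
    rsold = r' * r;
    k = 0;
    while sqrt(rsold) > tol
        Ap = A * p;
        alpha = rsold / (p' * Ap);
        x = x + alpha * p;
        r = r - alpha * Ap;
        rsnew = r' * r;
        k = k + 1;
        if mod(k, restart_every) == 0
            beta = 0;               % restart: discard the previous direction
        else
            beta = rsnew / rsold;   % usual update
        end
        p = r + beta * p;
        rsold = rsnew;
    end
end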

Explicit residual calculation

The formulas x_{k+1} := x_k + α_k p_k and r_k := b − Ax_k, which both hold in exact arithmetic, make the formulas r_{k+1} := r_k − α_k A p_k and r_{k+1} := b − Ax_{k+1} mathematically equivalent. The former is used in the algorithm to avoid an extra multiplication by A, since the vector A p_k is already computed to evaluate α_k. The latter may be more accurate, substituting the explicit calculation for the implicit one by the recursion subject to round-off error accumulation, and is thus recommended for an occasional evaluation.[7]

A norm of the residual is typically used for stopping criteria. The norm of the explicit residual provides a guaranteed level of accuracy both in exact arithmetic and in the presence of the rounding errors, where convergence naturally stagnates. In contrast, the implicit residual is known to keep getting smaller in amplitude well below the level of rounding errors and thus cannot be used to determine the stagnation of convergence.
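The difference between the recursively updated (implicit) residual and the explicitly computed residual b − Ax can be monitored cheaply at selected iterations. A minimal MATLAB sketch follows; the test matrix from gallery('poisson', ...), the right-hand side, and the check_every interval are all assumed values for illustration:

% Compare the implicit (recursively updated) residual with the explicit
% residual b - A*x once every `check_every` iterations of CG.
A = gallery('poisson', 10);    % assumed SPD test matrix (2-D Laplacian)
b = ones(size(A, 1), 1);
x = zeros(size(b));
r = b - A * x;                 % implicit residual, updated by recursion below
p = r;
rsold = r' * r;
check_every = 20;              % assumed monitoring interval

for k = 1:100
    Ap = A * p;
    alpha = rsold / (p' * Ap);
    x = x + alpha * p;
    r = r - alpha * Ap;        % implicit update
    if mod(k, check_every) == 0
        r_explicit = b - A * x;            % explicit residual
        fprintf('k = %3d  implicit %.3e  explicit %.3e\n', ...
                k, norm(r), norm(r_explicit));
    end
    rsnew = r' * r;
    p = r + (rsnew / rsold) * p;
    rsold = rsnew;
end

Typically the two norms agree until convergence stagnates, after which the implicit residual keeps shrinking while the explicit residual levels off at the rounding-error floor.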

Computation of alpha and beta

In the algorithm, α_k is chosen such that r_{k+1} is orthogonal to r_k. The denominator is simplified from

    α_k = (r_k^T r_k) / (r_k^T A p_k) = (r_k^T r_k) / (p_k^T A p_k),

since r_k = p_k − β_{k−1} p_{k−1} and the search directions are A-conjugate. The β_k is chosen such that p_{k+1} is conjugate to p_k. Initially, β_k is

    β_k = − (r_{k+1}^T A p_k) / (p_k^T A p_k).

Using

    r_{k+1} = r_k − α_k A p_k

and equivalently

    A p_k = (1/α_k)(r_k − r_{k+1}),

the numerator of β_k is rewritten as

    r_{k+1}^T A p_k = (1/α_k) r_{k+1}^T (r_k − r_{k+1}) = − (1/α_k) r_{k+1}^T r_{k+1},

because r_{k+1} and r_k are orthogonal by design. The denominator is rewritten as

    p_k^T A p_k = (r_k + β_{k−1} p_{k−1})^T A p_k = (1/α_k) r_k^T (r_k − r_{k+1}) = (1/α_k) r_k^T r_k,

using that the search directions p_k are conjugated and again that the residuals are orthogonal. This gives β_k = (r_{k+1}^T r_{k+1}) / (r_k^T r_k) in the algorithm after cancelling α_k.

"""
    conjugate_gradient!(A, b, x)

Return the solution to `A * x = b` using the conjugate gradient method.
"""
function conjugate_gradient!(
    A::AbstractMatrix, b::AbstractVector, x::AbstractVector; tol=eps(eltype(b))
)
    # Initialize residual vector
    residual = b - A * x
    # Initialize search direction vector
    search_direction = copy(residual)
    # Euclidean norm helper (avoids depending on LinearAlgebra.norm)
    norm(x) = sqrt(sum(x.^2))
    # Initial residual norm
    old_resid_norm = norm(residual)

    # Iterate until convergence
    while old_resid_norm > tol
        A_search_direction = A * search_direction
        step_size = old_resid_norm^2 / (search_direction' * A_search_direction)
        # Update solution
        @. x = x + step_size * search_direction
        # Update residual
        @. residual = residual - step_size * A_search_direction
        new_resid_norm = norm(residual)
        
        # Update search direction vector
        @. search_direction = residual + 
            (new_resid_norm / old_resid_norm)^2 * search_direction
        # Update residual norm for next iteration
        old_resid_norm = new_resid_norm
    end
    return x
end

Example code in MATLAB

function x = conjugate_gradient(A, b, x0, tol)
% Return the solution to `A * x = b` using the conjugate gradient method.
% Reminder: A should be symmetric and positive definite.

    if nargin < 4
        tol = eps;
    end

    r = b - A * x0;
    p = r;
    rsold = r' * r;

    x = x0;

    while sqrt(rsold) > tol
        Ap = A * p;
        alpha = rsold / (p' * Ap);
        x = x + alpha * p;
        r = r - alpha * Ap;
        rsnew = r' * r;
        p = r + (rsnew / rsold) * p;
        rsold = rsnew;
    end
end


Numerical example

Consider the linear system Ax = b given by

we will perform two steps of the conjugate gradient method beginning with the initial guess

in order to find an approximate solution to the system.

Solution

For reference, the exact solution is

Our first step is to calculate the residual vector r0 associated with x0. This residual is computed from the formula r0 = b - Ax0, and in our case is equal to

Since this is the first iteration, we will use the residual vector r0 as our initial search direction p0; the method of selecting pk will change in further iterations.

We now compute the scalar α0 using the relationship

We can now compute x1 using the formula

This result completes the first iteration, the result being an "improved" approximate solution to the system, x1. We may now move on and compute the next residual vector r1 using the formula

Our next step in the process is to compute the scalar β0 that will eventually be used to determine the next search direction p1.

Now, using this scalar β0, we can compute the next search direction p1 using the relationship

We now compute the scalar α1 using our newly acquired p1 using the same method as that used for α0.

Finally, we find x2 using the same method as that used to find x1.

The result, x2, is a "better" approximation to the system's solution than x1 and x0. If exact arithmetic were to be used in this example instead of limited-precision, then the exact solution would theoretically have been reached after n = 2 iterations (n being the order of the system).
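The arithmetic of such a two-step run can be reproduced directly. The MATLAB sketch below uses an assumed 2 × 2 system (A = [4, 1; 1, 3], b = [1; 2], initial guess x0 = [2; 1]; these particular numbers are illustrative, not values fixed by the text above) and prints the quantities computed at each step:

% Two explicit steps of conjugate gradient on an assumed 2-by-2 SPD system.
A = [4, 1; 1, 3];
b = [1; 2];
x = [2; 1];                   % initial guess x0
r = b - A * x;                % r0
p = r;                        % p0 = r0

for k = 0:1
    Ap    = A * p;
    alpha = (r' * r) / (p' * Ap);          % alpha_k
    x     = x + alpha * p;                 % x_{k+1}
    r_new = r - alpha * Ap;                % r_{k+1}
    beta  = (r_new' * r_new) / (r' * r);   % beta_k
    fprintf('step %d: alpha = %.4f, beta = %.4f, x = [%.4f; %.4f]\n', ...
            k, alpha, beta, x(1), x(2));
    p = r_new + beta * p;                  % p_{k+1}
    r = r_new;
end
% After two steps x agrees with A\b up to rounding, illustrating finite termination.

For this assumed system, A\b ≈ [0.0909; 0.6364], which the two printed iterates approach and then reproduce.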

Finite termination property

Under exact arithmetic, the number of iterations required is no more than the order of the matrix. This behavior is known as the finite termination property of the conjugate gradient method. It refers to the method's ability to reach the exact solution of a linear system in a finite number of steps—at most equal to the dimension of the system—when exact arithmetic is used. This property arises from the fact that, at each iteration, the method generates a residual vector that is orthogonal to all previous residuals. These residuals form a mutually orthogonal set.

In an n-dimensional space, it is impossible to construct more than n linearly independent and mutually orthogonal vectors unless one of them is the zero vector. Therefore, once a zero residual appears, the method has reached the solution and must terminate. This ensures that the conjugate gradient method converges in at most n steps.

To demonstrate this, consider the system Ax = b with A = [3, -2; -2, 4] and b = [1; 2], whose exact solution is x = [1; 1].

We start from the initial guess x0 = [1; 2]. Since A is symmetric positive-definite and the system is 2-dimensional, the conjugate gradient method should find the exact solution in no more than 2 steps. The following MATLAB code demonstrates this behavior:

A = [3, -2; -2, 4];
x_true = [1; 1];
b = A * x_true;

x = [1; 2];             % initial guess
r = b - A * x;
p = r;

for k = 1:2
    Ap = A * p;
    alpha = (r' * r) / (p' * Ap);
    x = x + alpha * p;
    r_new = r - alpha * Ap;
    beta = (r_new' * r_new) / (r' * r);
    p = r_new + beta * p;
    r = r_new;
end

disp('Computed solution after two iterations:');
disp(x);

The output confirms that the method reaches the exact solution x = [1; 1] after two iterations, consistent with the theoretical prediction. This example illustrates how the conjugate gradient method behaves as a direct method under idealized conditions.

Application to sparse systems

The finite termination property also has practical implications in solving large sparse systems, which frequently arise in scientific and engineering applications. For instance, discretizing the two-dimensional Laplace equation using finite differences on a uniform grid leads to a sparse linear system Ax = b, where A is symmetric and positive definite.

Using a 5 × 5 interior grid yields a 25 × 25 system, and the coefficient matrix A has a five-point stencil pattern. Each row of A contains at most five nonzero entries, corresponding to the central point and its immediate neighbors. For example, such a matrix can be assembled as in the sketch below.
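The following MATLAB sketch builds the standard five-point-stencil Laplacian on an m × m interior grid via Kronecker products (here m = 5, so A is 25 × 25) and solves it with MATLAB's built-in pcg; the right-hand side is an assumed illustrative choice, and the conjugate_gradient function above could be used in place of pcg:

% Assemble the 2-D five-point-stencil Laplacian on an m-by-m interior grid.
m = 5;                                   % interior grid points per direction
e = ones(m, 1);
T = spdiags([-e, 2*e, -e], -1:1, m, m);  % 1-D second-difference matrix
I = speye(m);
A = kron(I, T) + kron(T, I);             % 25-by-25, symmetric positive-definite

b = ones(m^2, 1);                        % assumed right-hand side
[x, flag] = pcg(A, b, 1e-10, m^2);       % converges in at most m^2 = 25 steps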

Although the system dimension is 25, the conjugate gradient method is theoretically guaranteed to terminate in at most 25 iterations under exact arithmetic. In practice, convergence often occurs in far fewer steps due to the matrix's spectral properties. This efficiency makes CGM particularly attractive for solving large-scale systems arising from partial differential equations, such as those found in heat conduction, fluid dynamics, and electrostatics.

Convergence properties

The conjugate gradient method can theoretically be viewed as a direct method, as in the absence of round-off error it produces the exact solution after a finite number of iterations, which is not larger than the size of the matrix. In practice, the exact solution is never obtained since the conjugate gradient method is unstable with respect to even small perturbations, e.g., most directions are not in practice conjugate, due to a degenerative nature of generating the Krylov subspaces.

As an iterative method, the conjugate gradient method monotonically (in the energy norm) improves approximations x_k to the exact solution and may reach the required tolerance after a relatively small (compared to the problem size) number of iterations. The improvement is typically linear and its speed is determined by the condition number κ(A) of the system matrix A: the larger κ(A) is, the slower the improvement.[8]

However, an interesting case appears when the eigenvalues are spaced logarithmically for a large symmetric matrix. For example, let A = Q D Q^T, where Q is a random orthogonal matrix and D is a diagonal matrix with eigenvalues spaced logarithmically over several orders of magnitude. Despite the finite termination property of CGM, where the exact solution should theoretically be reached in at most n steps, the method may exhibit stagnation in convergence. In such a scenario, even after many more iterations (e.g., ten times the matrix size) the error may only decrease modestly. Moreover, the iterative error may oscillate significantly, making it unreliable as a stopping condition. This poor convergence is not explained by the condition number alone, but rather by the eigenvalue distribution itself. When the eigenvalues are more evenly spaced or randomly distributed, such convergence issues are typically absent, highlighting that CGM performance depends not only on κ(A) but also on how the eigenvalues are distributed.[9]
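A scenario of this kind is easy to set up for experimentation. The following MATLAB sketch uses an assumed size n = 100 and an assumed eigenvalue range of 1 to 10^5 (the cited reference may use different values); the exact convergence behavior depends on the realized spectrum:

% CG on a matrix with logarithmically spaced eigenvalues (assumed parameters).
n = 100;
[Q, ~] = qr(randn(n));                     % random orthogonal matrix
lambda = logspace(0, 5, n)';               % eigenvalues from 1 to 1e5
A = Q * diag(lambda) * Q';
A = (A + A') / 2;                          % symmetrize against round-off

x_true = randn(n, 1);
b = A * x_true;

[x, flag, relres, iter] = pcg(A, b, 1e-12, 10 * n);
fprintf('flag = %d, relative residual = %.2e after %d iterations\n', ...
        flag, relres, iter);
fprintf('error norm = %.2e\n', norm(x - x_true));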

If κ(A) is large, preconditioning is commonly used to replace the original system Ax − b = 0 with M^{−1}(Ax − b) = 0, such that κ(M^{−1}A) is smaller than κ(A); see below.

Convergence theorem

Define a subset of polynomials as

    Π_k* := { p in Π_k : p(0) = 1 },

where Π_k is the set of polynomials of maximal degree k.

Let x_k be the iterative approximations of the exact solution x*, and define the errors as e_k := x_k − x*. Now, the rate of convergence can be approximated as[4][10]

    ‖e_k‖_A = min_{p in Π_k*} ‖p(A) e_0‖_A ≤ min_{p in Π_k*} max_{λ in σ(A)} |p(λ)| · ‖e_0‖_A ≤ 2 ( (√κ(A) − 1) / (√κ(A) + 1) )^k ‖e_0‖_A,

where σ(A) denotes the spectrum, and κ(A) denotes the condition number.

This shows that k = (1/2)√κ(A) log(‖e_0‖_A ε^{−1}) iterations suffice to reduce the error to 2ε for any ε > 0.

Note the important limit when κ(A) tends to ∞:

    2 ( (√κ(A) − 1) / (√κ(A) + 1) )^k ≈ 2 exp( −2k / √κ(A) ).

This limit shows a faster convergence rate compared to the iterative methods of Jacobi or Gauss–Seidel, which scale as ≈ exp(−2k/κ(A)).

No round-off error is assumed in the convergence theorem, but the convergence bound is commonly valid in practice as theoretically explained[5] by Anne Greenbaum.

Practical convergence

If initialized randomly, the first stage of iterations is often the fastest, as the error is eliminated within the Krylov subspace that initially reflects a smaller effective condition number. The second stage of convergence is typically well defined by the theoretical convergence bound with √κ(A), but may be super-linear, depending on the distribution of the spectrum of the matrix A and the spectral distribution of the error.[5] In the last stage, the smallest attainable accuracy is reached and the convergence stalls or the method may even start diverging. In typical scientific computing applications in double-precision floating-point format for matrices of large sizes, the conjugate gradient method uses a stopping criterion with a tolerance that terminates the iterations during the first or second stage.

The preconditioned conjugate gradient method

In most cases, preconditioning is necessary to ensure fast convergence of the conjugate gradient method. If M^{−1} is symmetric positive-definite and M^{−1}A has a better condition number than A, a preconditioned conjugate gradient method can be used. It takes the following form:[11]

r_0 := b − A x_0
z_0 := M^{−1} r_0
p_0 := z_0
k := 0
repeat
    α_k := (r_k^T z_k) / (p_k^T A p_k)
    x_{k+1} := x_k + α_k p_k
    r_{k+1} := r_k − α_k A p_k
    if r_{k+1} is sufficiently small then exit loop end if
    z_{k+1} := M^{−1} r_{k+1}
    β_k := (z_{k+1}^T r_{k+1}) / (z_k^T r_k)
    p_{k+1} := z_{k+1} + β_k p_k
    k := k + 1
end repeat
The result is x_{k+1}
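A direct transcription of this iteration into MATLAB might look as follows (a sketch with a hypothetical function name; the preconditioner solve is written as M \ r for clarity, although in practice M^{−1} is applied via a factorization rather than by forming M^{−1} explicitly, as discussed below):

function x = preconditioned_conjugate_gradient(A, b, M, x, tol)
% Preconditioned CG sketch for SPD A with SPD preconditioner M.
    r = b - A * x;
    z = M \ r;                 % preconditioner solve: z0 = M^{-1} r0
    p = z;
    rz_old = r' * z;
    while norm(r) > tol
        Ap = A * p;
        alpha = rz_old / (p' * Ap);
        x = x + alpha * p;
        r = r - alpha * Ap;
        z = M \ r;             % z_{k+1} = M^{-1} r_{k+1}
        rz_new = r' * z;
        beta = rz_new / rz_old;        % Fletcher–Reeves form
        p = z + beta * p;
        rz_old = rz_new;
    end
end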

The above formulation is equivalent to applying the regular conjugate gradient method to the preconditioned system[12]

    E^{−1} A (E^{−1})^T x̂ = E^{−1} b,

where

    E E^T = M,   x̂ = E^T x.

The Cholesky decomposition of the preconditioner must be used to keep the symmetry (and positive definiteness) of the system. However, this decomposition does not need to be computed, and it is sufficient to know M^{−1}. It can be shown that E^{−1} A (E^{−1})^T has the same spectrum as M^{−1} A.

The preconditioner matrix M has to be symmetric positive-definite and fixed, i.e., cannot change from iteration to iteration. If any of these assumptions on the preconditioner is violated, the behavior of the preconditioned conjugate gradient method may become unpredictable.

An example of a commonly used preconditioner is the incomplete Cholesky factorization.[13]

Using the preconditioner in practice

It is important to keep in mind that we don't want to invert the matrix M explicitly in order to get M^{−1} for use in the process, since inverting M would take more time/computational resources than solving the conjugate gradient algorithm itself. As an example, let's say that we are using a preconditioner coming from incomplete Cholesky factorization. The resulting matrix is the lower triangular matrix L, and the preconditioner matrix is:

    M = L L^T.

Then we have to solve:

    M z = r.

But:

    M = L L^T.

Then:

    L L^T z = r.

Let's take an intermediary vector a:

    a = L^T z,   so that   L a = r.

Since r is known and L is lower triangular, solving for a is easy and computationally cheap by using forward substitution. Then, we substitute a in the original equation:

    L^T z = a.

Since a is known and L^T is upper triangular, solving for z is easy and computationally cheap by using backward substitution.

Using this method, there is no need to invert M or L explicitly at all, and we still obtain z.
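In MATLAB, for example, the two triangular solves can be written with the backslash operator applied to the incomplete Cholesky factor. The sketch below uses an assumed sparse SPD test matrix and a stand-in residual vector:

A = gallery('poisson', 30);      % assumed sparse SPD test matrix
r = rand(size(A, 1), 1);         % stand-in for a CG residual
L = ichol(A);                    % incomplete Cholesky factor: M = L*L' (approximately)
% Apply z = M^{-1} r with two triangular solves instead of inverting M:
a = L  \ r;                      % forward substitution:  L * a = r
z = L' \ a;                      % backward substitution: L' * z = a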

The flexible preconditioned conjugate gradient method

In numerically challenging applications, sophisticated preconditioners are used, which may lead to variable preconditioning, changing between iterations. Even if the preconditioner is symmetric positive-definite on every iteration, the fact that it may change makes the arguments above invalid, and in practical tests leads to a significant slow down of the convergence of the algorithm presented above. Using the Polak–Ribière formula

    β_k := (z_{k+1}^T (r_{k+1} − r_k)) / (z_k^T r_k)

instead of the Fletcher–Reeves formula

    β_k := (z_{k+1}^T r_{k+1}) / (z_k^T r_k)

may dramatically improve the convergence in this case.[14] This version of the preconditioned conjugate gradient method can be called[15] flexible, as it allows for variable preconditioning. The flexible version is also shown[16] to be robust even if the preconditioner is not symmetric positive definite (SPD).

The implementation of the flexible version requires storing an extra vector. For a fixed SPD preconditioner, z_{k+1}^T r_k = 0, so both formulas for β_k are equivalent in exact arithmetic, i.e., without the round-off error.

The mathematical explanation of the better convergence behavior of the method with the Polak–Ribière formula is that the method is locally optimal in this case, in particular, it does not converge slower than the locally optimal steepest descent method.[17]
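In code, the flexible variant differs from the preconditioned sketch above only in the β update and in keeping the previous residual. A MATLAB fragment of the modified step, assuming the same variable names as in that sketch plus r_old for the stored previous residual:

% Flexible PCG update of the search direction (Polak–Ribière form).
% Assumed context: r is the current residual r_{k+1}, r_old the previous
% residual r_k, z = M \ r, and rz_old = z_k' * r_k from the previous step.
beta = ((r - r_old)' * z) / rz_old;   % Polak–Ribière formula
p = z + beta * p;                     % next search direction
% For a fixed SPD preconditioner this beta coincides with the
% Fletcher–Reeves value in exact arithmetic.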

Vs. the locally optimal steepest descent method

In both the original and the preconditioned conjugate gradient methods one only needs to set β_k := 0 in order to turn them into locally optimal, line-search based, steepest descent methods. With this substitution, the vectors p are always the same as the vectors z, so there is no need to store the vectors p. Thus, every iteration of these steepest descent methods is a bit cheaper compared to that for the conjugate gradient methods. However, the latter converge faster, unless a (highly) variable and/or non-SPD preconditioner is used, see above.

Conjugate gradient method as optimal feedback controller for double integrator

The conjugate gradient method can also be derived using optimal control theory.[18] In this approach, the conjugate gradient method falls out as an optimal feedback controller for the double integrator system, with variable feedback gains.[18]

Conjugate gradient on the normal equations

The conjugate gradient method can be applied to an arbitrary n-by-m matrix A by applying it to the normal equations A^T A x = A^T b, since A^T A is a symmetric positive-semidefinite matrix for any A. The result is conjugate gradient on the normal equations (CGN or CGNR):

    A^T A x = A^T b.

As an iterative method, it is not necessary to form A^T A explicitly in memory but only to perform the matrix–vector and transpose matrix–vector multiplications. Therefore, CGNR is particularly useful when A is a sparse matrix, since these operations are usually extremely efficient. However, the downside of forming the normal equations is that the condition number κ(A^T A) is equal to κ(A)^2, so the rate of convergence of CGNR may be slow and the quality of the approximate solution may be sensitive to round-off errors. Finding a good preconditioner is often an important part of using the CGNR method.
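The normal-equations system can be solved without ever forming A^T A: it suffices to supply CG with a routine that applies A^T(A·) to a vector. A MATLAB sketch using pcg with a function handle, where the sparse rectangular A and the right-hand side b are assumed example data:

% CGNR: solve A'*A*x = A'*b by passing an operator to pcg,
% so the product A'*A is never formed explicitly.
A = sprandn(2000, 500, 0.01);          % assumed sparse rectangular example
b = randn(2000, 1);

applyNormal = @(x) A' * (A * x);       % x -> A'*(A*x), two sparse products
x = pcg(applyNormal, A' * b, 1e-8, 500);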

Several algorithms have been proposed (e.g., CGLS, LSQR). The LSQR algorithm purportedly has the best numerical stability when A is ill-conditioned, i.e., A has a large condition number.

Conjugate gradient method for complex Hermitian matrices

The conjugate gradient method with a trivial modification is extendable to solving, given a complex-valued matrix A and vector b, the system of linear equations Ax = b for the complex-valued vector x, where A is Hermitian (i.e., A' = A) and positive-definite, and the symbol ' denotes the conjugate transpose. The trivial modification is simply substituting the conjugate transpose for the real transpose everywhere.

Advantages and disadvantages

The advantages and disadvantages of the conjugate gradient methods are summarized in the lecture notes by Nemirovsky and Ben-Tal.[19]: Sec. 7.3

A pathological example

This example is from [20]. Let b and A be as defined there. Since A is invertible, there exists a unique solution to Ax = b. Solving it by conjugate gradient descent gives rather bad convergence: during the CG process, the error grows exponentially, until it suddenly becomes zero as the unique solution is found.

References

  1. ^ Hestenes, Magnus R.; Stiefel, Eduard (December 1952). "Methods of Conjugate Gradients for Solving Linear Systems" (PDF). Journal of Research of the National Bureau of Standards. 49 (6): 409. doi:10.6028/jres.049.044.
  2. ^ Straeter, T. A. (1971). On the Extension of the Davidon–Broyden Class of Rank One, Quasi-Newton Minimization Methods to an Infinite Dimensional Hilbert Space with Applications to Optimal Control Problems (PhD thesis). North Carolina State University. hdl:2060/19710026200 – via NASA Technical Reports Server.
  3. ^ Speiser, Ambros (2004). "Konrad Zuse und die ERMETH: Ein weltweiter Architektur-Vergleich" [Konrad Zuse and the ERMETH: A worldwide comparison of architectures]. In Hellige, Hans Dieter (ed.). Geschichten der Informatik. Visionen, Paradigmen, Leitmotive (in German). Berlin: Springer. p. 185. ISBN 3-540-00217-0.
  4. ^ a b c d Polyak, Boris (1987). Introduction to Optimization.
  5. ^ a b c Greenbaum, Anne (1997). Iterative Methods for Solving Linear Systems. doi:10.1137/1.9781611970937. ISBN 978-0-89871-396-1.
  6. ^ Paquette, Elliot; Trogdon, Thomas (March 2023). "Universality for the Conjugate Gradient and MINRES Algorithms on Sample Covariance Matrices". Communications on Pure and Applied Mathematics. 76 (5): 1085–1136. arXiv:2007.00640. doi:10.1002/cpa.22081. ISSN 0010-3640.
  7. ^ Shewchuk, Jonathan R (1994). An Introduction to the Conjugate Gradient Method Without the Agonizing Pain (PDF).
  8. ^ Saad, Yousef (2003). Iterative methods for sparse linear systems (2nd ed.). Philadelphia, Pa.: Society for Industrial and Applied Mathematics. pp. 195. ISBN 978-0-89871-534-7.
  9. ^ Holmes, M. (2023). Introduction to Scientific Computing and Data Analysis, 2nd Ed. Springer. ISBN 978-3-031-22429-4.
  10. ^ Hackbusch, W. (2025-08-06). Iterative solution of large sparse systems of equations (2nd ed.). Switzerland: Springer. ISBN 978-3-319-28483-5. OCLC 952572240.
  11. ^ Barrett, Richard; Berry, Michael; Chan, Tony F.; Demmel, James; Donato, June; Dongarra, Jack; Eijkhout, Victor; Pozo, Roldan; Romine, Charles; van der Vorst, Henk. Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods (PDF) (2nd ed.). Philadelphia, PA: SIAM. p. 13. Retrieved 2025-08-06.
  12. ^ Golub, Gene H.; Van Loan, Charles F. (2013). Matrix Computations (4th ed.). Johns Hopkins University Press. sec. 11.5.2. ISBN 978-1-4214-0794-4.
  13. ^ Concus, P.; Golub, G. H.; Meurant, G. (1985). "Block Preconditioning for the Conjugate Gradient Method". SIAM Journal on Scientific and Statistical Computing. 6 (1): 220–252. doi:10.1137/0906018.
  14. ^ Golub, Gene H.; Ye, Qiang (1999). "Inexact Preconditioned Conjugate Gradient Method with Inner-Outer Iteration". SIAM Journal on Scientific Computing. 21 (4): 1305. CiteSeerX 10.1.1.56.1755. doi:10.1137/S1064827597323415.
  15. ^ Notay, Yvan (2000). "Flexible Conjugate Gradients". SIAM Journal on Scientific Computing. 22 (4): 1444–1460. CiteSeerX 10.1.1.35.7473. doi:10.1137/S1064827599362314.
  16. ^ Bouwmeester, Henricus; Dougherty, Andrew; Knyazev, Andrew V. (2015). "Nonsymmetric Preconditioning for Conjugate Gradient and Steepest Descent Methods 1". Procedia Computer Science. 51: 276–285. arXiv:1212.6680. doi:10.1016/j.procs.2015.05.241. S2CID 51978658.
  17. ^ Knyazev, Andrew V.; Lashuk, Ilya (2008). "Steepest Descent and Conjugate Gradient Methods with Variable Preconditioning". SIAM Journal on Matrix Analysis and Applications. 29 (4): 1267. arXiv:math/0605767. doi:10.1137/060675290. S2CID 17614913.
  18. ^ a b Ross, I. M., "An Optimal Control Theory for Accelerated Optimization," arXiv:1902.09004, 2019.
  19. ^ Nemirovsky and Ben-Tal (2023). "Optimization III: Convex Optimization" (PDF).
  20. ^ Pedregosa, Fabian; Paquette, Courtney; Trogdon, Tom; Pennington, Jeffrey. "Random Matrix Theory and Machine Learning Tutorial". random-matrix-learning.github.io. Retrieved 2025-08-06.
