Convolution

From Wikipedia, the free encyclopedia
Visual comparison of convolution, cross-correlation, and autocorrelation. For the operations involving function f, and assuming the height of f is 1.0, the value of the result at 5 different points is indicated by the shaded area below each point. The symmetry of f is the reason f ∗ g and f ⋆ g are identical in this example.

In mathematics (in particular, functional analysis), convolution is a mathematical operation on two functions f and g that produces a third function f ∗ g, as the integral of the product of the two functions after one is reflected about the y-axis and shifted. The term convolution refers to both the resulting function and to the process of computing it. The integral is evaluated for all values of shift, producing the convolution function. The choice of which function is reflected and shifted before the integral does not change the integral result (see commutativity). Graphically, it expresses how the 'shape' of one function is modified by the other.

Some features of convolution are similar to cross-correlation: for real-valued functions, of a continuous or discrete variable, convolution f ∗ g differs from cross-correlation f ⋆ g only in that either f(x) or g(x) is reflected about the y-axis in convolution; thus it is a cross-correlation of g(−x) and f(x), or of f(−x) and g(x).[A] For complex-valued functions, the cross-correlation operator is the adjoint of the convolution operator.

Convolution has applications that include probability, statistics, acoustics, spectroscopy, signal processing and image processing, geophysics, engineering, physics, computer vision and differential equations.[1]

The convolution can be defined for functions on Euclidean space and other groups (as algebraic structures).[citation needed] For example, periodic functions, such as the discrete-time Fourier transform, can be defined on a circle and convolved by periodic convolution. (See row 18 at DTFT § Properties.) A discrete convolution can be defined for functions on the set of integers.

Generalizations of convolution have applications in the field of numerical analysis and numerical linear algebra, and in the design and implementation of finite impulse response filters in signal processing.[citation needed]

Computing the inverse of the convolution operation is known as deconvolution.

Definition

The convolution of f and g is written f ∗ g, denoting the operator with the symbol ∗.[B] It is defined as the integral of the product of the two functions after one is reflected about the y-axis and shifted. As such, it is a particular kind of integral transform:

$$(f * g)(t) := \int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\, d\tau.$$

An equivalent definition is (see commutativity):

$$(f * g)(t) := \int_{-\infty}^{\infty} f(t - \tau)\, g(\tau)\, d\tau.$$

While the symbol t is used above, it need not represent the time domain. At each t, the convolution formula can be described as the area under the function f(τ) weighted by the function g(−τ) shifted by the amount t. As t changes, the weighting function g(t − τ) emphasizes different parts of the input function f(τ); if t is a positive value, then g(t − τ) is equal to g(−τ) shifted along the τ-axis toward the right (toward +∞) by the amount of t, while if t is a negative value, then g(t − τ) is equal to g(−τ) shifted toward the left (toward −∞) by the amount of |t|.

For functions f, g supported on only [0, ∞) (i.e., zero for negative arguments), the integration limits can be truncated, resulting in:

$$(f * g)(t) = \int_{0}^{t} f(\tau)\, g(t - \tau)\, d\tau \quad \text{for } f, g : [0, \infty) \to \mathbb{R}.$$

For the multi-dimensional formulation of convolution, see domain of definition (below).
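To make the definition concrete, the integral can be approximated numerically by sampling both functions on a uniform grid and scaling a discrete convolution by the step size. The sketch below is only an illustration of the definition; the box function, the decaying exponential, and the grid spacing are assumptions, and only NumPy is required. The closed-form value used as a check follows from evaluating the integral by hand.

```python
import numpy as np

# Riemann-sum approximation of (f * g)(t) = ∫ f(τ) g(t − τ) dτ.
# Example pair (assumed for illustration): f = box of height 1 on [0, 1),
# g = decaying exponential e^{-2τ} on [0, ∞).
dt = 0.001
tau = np.arange(0.0, 10.0, dt)
f = ((tau >= 0.0) & (tau < 1.0)).astype(float)
g = np.exp(-2.0 * tau)

# np.convolve computes the discrete sum Σ f[m] g[n−m]; multiplying by dt
# turns that sum into an approximation of the integral.
conv = np.convolve(f, g)[: tau.size] * dt

# Sanity check against the exact value (f * g)(0.5) = (1 − e^{−1}) / 2.
t_check = 0.5
print(conv[int(t_check / dt)], (1.0 - np.exp(-1.0)) / 2.0)  # ≈ 0.316 for both
```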

Notation

A common engineering notational convention is:[2]

$$f(t) * g(t) := \underbrace{\int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\, d\tau}_{(f\,*\,g)(t)},$$

which has to be interpreted carefully to avoid confusion. For instance, f(t) ∗ g(t − t₀) is equivalent to (f ∗ g)(t − t₀), but f(t − t₀) ∗ g(t − t₀) is in fact equivalent to (f ∗ g)(t − 2t₀).[3]

Relations with other transforms

Given two functions f(t) and g(t) with bilateral Laplace transforms (two-sided Laplace transforms)

$$F(s) = \int_{-\infty}^{\infty} e^{-su}\, f(u)\, du$$

and

$$G(s) = \int_{-\infty}^{\infty} e^{-sv}\, g(v)\, dv$$

respectively, the convolution operation (f ∗ g)(t) can be defined as the inverse Laplace transform of the product of F(s) and G(s).[4][5] More precisely,

$$F(s) \cdot G(s) = \int_{-\infty}^{\infty} e^{-su}\, f(u)\, du \cdot \int_{-\infty}^{\infty} e^{-sv}\, g(v)\, dv = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-s(u+v)}\, f(u)\, g(v)\, du\, dv.$$

Let t = u + v; then

$$F(s) \cdot G(s) = \int_{-\infty}^{\infty} e^{-st} \underbrace{\left( \int_{-\infty}^{\infty} f(u)\, g(t - u)\, du \right)}_{(f\,*\,g)(t)} dt.$$

Note that F(s) · G(s) is the bilateral Laplace transform of (f ∗ g)(t). A similar derivation can be done using the unilateral Laplace transform (one-sided Laplace transform).

The convolution operation also describes the output (in terms of the input) of an important class of operations known as linear time-invariant (LTI). See LTI system theory for a derivation of convolution as the result of LTI constraints. In terms of the Fourier transforms of the input and output of an LTI operation, no new frequency components are created. The existing ones are only modified (amplitude and/or phase). In other words, the output transform is the pointwise product of the input transform with a third transform (known as a transfer function). See Convolution theorem for a derivation of that property of convolution. Conversely, convolution can be derived as the inverse Fourier transform of the pointwise product of two Fourier transforms.
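As a small illustrative sketch of the LTI view (not taken from the article text), the output of such a system is obtained by convolving the input with the system's impulse response. Here the impulse response is assumed to be an 8-tap moving average and the input is a synthetic noisy sinusoid; only NumPy is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
x = np.sin(2 * np.pi * 3 * t) + 0.3 * rng.standard_normal(t.size)  # input signal
h = np.ones(8) / 8.0        # assumed impulse response: 8-sample moving average

# For an LTI system, output = input convolved with the impulse response.
y = np.convolve(x, h, mode="same")   # smoothed version of x, same length as x
```

In the frequency domain this only rescales the existing components of x by the transfer function, as the convolution theorem below makes precise.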

Visual explanation

  1. Express each function in terms of a dummy variable τ.
  2. Reflect one of the functions: g(τ) → g(−τ).
  3. Add an offset of the independent variable, t, which allows g(−τ) to slide along the τ-axis. If t is a positive value, then g(t − τ) is equal to g(−τ) shifted along the τ-axis toward the right (toward +∞) by the amount of t. If t is a negative value, then g(t − τ) is equal to g(−τ) shifted toward the left (toward −∞) by the amount of |t|.
  4. Start t at −∞ and slide it all the way to +∞. Wherever the two functions intersect, find the integral of their product. In other words, at time t, compute the area under the function f(τ) weighted by the weighting function g(t − τ).

The resulting waveform (not shown here) is the convolution of functions f and g.

If f(t) is a unit impulse, the result of this process is simply g(t). Formally:

$$\int_{-\infty}^{\infty} \delta(\tau)\, g(t - \tau)\, d\tau = g(t).$$

In this example, the red-colored "pulse", g(τ), is an even function (g(−τ) = g(τ)), so convolution is equivalent to correlation. A snapshot of this "movie" shows the functions g(t − τ) and f(τ) (in blue) for some value of the parameter t, which is arbitrarily defined as the distance along the τ axis from the point τ = 0 to the center of the red pulse. The amount of yellow is the area of the product f(τ) · g(t − τ), computed by the convolution/correlation integral. The movie is created by continuously changing t and recomputing the integral. The result (shown in black) is a function of t, but is plotted on the same axis as τ, for convenience and comparison.
In this depiction, f(τ) could represent the response of a resistor-capacitor circuit to a narrow pulse that occurs at τ = 0. In other words, if g(τ) = δ(τ), the result of convolution is just f(t). But when g(τ) is the wider pulse (in red), the response is a "smeared" version of f(t). It begins at a negative value of t, because we defined t as the distance from the τ = 0 axis to the center of the wide pulse (instead of the leading edge).

Historical developments

One of the earliest uses of the convolution integral appeared in D'Alembert's derivation of Taylor's theorem in Recherches sur différents points importants du système du monde, published in 1754.[6]

Also, an expression of the type:

$$\int f(u) \cdot g(x - u)\, du$$

is used by Sylvestre François Lacroix on page 505 of his book entitled Treatise on differences and series, which is the last of 3 volumes of the encyclopedic series: Traité du calcul différentiel et du calcul intégral, Chez Courcier, Paris, 1797–1800.[7] Soon thereafter, convolution operations appear in the works of Pierre Simon Laplace, Jean-Baptiste Joseph Fourier, Siméon Denis Poisson, and others. The term itself did not come into wide use until the 1950s or 1960s. Prior to that it was sometimes known as Faltung (which means folding in German), composition product, superposition integral, and Carson's integral.[8] Yet it appears as early as 1903, though the definition is rather unfamiliar in older uses.[9][10]

The operation:

$$\int_0^t \varphi(s)\, \psi(t - s)\, ds, \qquad 0 \le t < \infty,$$

is a particular case of composition products considered by the Italian mathematician Vito Volterra in 1913.[11]

Circular convolution

When a function g_T is periodic, with period T, then for functions f such that f ∗ g_T exists, the convolution is also periodic and identical to:

$$(f * g_T)(t) \equiv \int_{t_0}^{t_0 + T} \left[ \sum_{k=-\infty}^{\infty} f(\tau + kT) \right] g_T(t - \tau)\, d\tau,$$

where t₀ is an arbitrary choice. The summation is called a periodic summation of the function f.

When g_T is a periodic summation of another function, g, then f ∗ g_T is known as a circular or cyclic convolution of f and g.

And if the periodic summation above is replaced by f_T, the operation is called a periodic convolution of f_T and g_T.

Discrete convolution

Discrete 2D Convolution Animation

For complex-valued functions f and g defined on the set Z of integers, the discrete convolution of f and g is given by:[12]

$$(f * g)[n] = \sum_{m=-\infty}^{\infty} f[m]\, g[n - m],$$

or equivalently (see commutativity) by:

$$(f * g)[n] = \sum_{m=-\infty}^{\infty} f[n - m]\, g[m].$$

The convolution of two finite sequences is defined by extending the sequences to finitely supported functions on the set of integers. When the sequences are the coefficients of two polynomials, then the coefficients of the ordinary product of the two polynomials are the convolution of the original two sequences. This is known as the Cauchy product of the coefficients of the sequences.

Thus, when g is non-zero only over the finite interval [−M, +M] (representing, for instance, a finite impulse response), a finite summation may be used:[13]

$$(f * g)[n] = \sum_{m=-M}^{M} f[n - m]\, g[m].$$
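A direct translation of this sum into code, checked against NumPy's built-in routine; the example sequences are arbitrary. Because the same sum produces the coefficients of a polynomial product, the result also equals the Cauchy product mentioned above.

```python
import numpy as np

def discrete_conv(f, g):
    """Full discrete convolution: (f*g)[n] = Σ_m f[m] · g[n − m]."""
    out = [0.0] * (len(f) + len(g) - 1)
    for n in range(len(out)):
        for m in range(len(f)):
            if 0 <= n - m < len(g):
                out[n] += f[m] * g[n - m]
    return out

f = [1, 2, 3]   # coefficients of 1 + 2x + 3x²
g = [4, 5]      # coefficients of 4 + 5x
print(discrete_conv(f, g))   # [4, 13, 22, 15]: coefficients of the product polynomial
print(np.convolve(f, g))     # same values
```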

Circular discrete convolution

When a function g_N is periodic, with period N, then for functions f such that f ∗ g_N exists, the convolution is also periodic and identical to:

$$(f * g_N)[n] \equiv \sum_{m=0}^{N-1} \left( \sum_{k=-\infty}^{\infty} f[m + kN] \right) g_N[n - m].$$

The summation on k is called a periodic summation of the function f.

If g_N is a periodic summation of another function, g, then f ∗ g_N is known as a circular convolution of f and g.

When the non-zero durations of both f and g are limited to the interval [0, N − 1], f ∗ g_N reduces to these common forms:

$$(f * g_N)[n] = \sum_{m=0}^{N-1} f[m]\, g_N[n - m] = \sum_{m=0}^{N-1} f[m]\, g[(n - m) \bmod N] \triangleq (f *_N g)[n]. \qquad \text{(Eq.1)}$$

The notation f *_N g for cyclic convolution denotes convolution over the cyclic group of integers modulo N.

Circular convolution arises most often in the context of fast convolution with a fast Fourier transform (FFT) algorithm.
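A minimal sketch of cyclic convolution using the (n − m) mod N indexing from Eq.1, cross-checked against the FFT route described in the next section; the two length-4 sequences are arbitrary and NumPy is assumed.

```python
import numpy as np

def circular_conv(f, g):
    """Cyclic convolution over Z/NZ: Σ_m f[m] · g[(n − m) mod N]."""
    N = len(f)
    return [sum(f[m] * g[(n - m) % N] for m in range(N)) for n in range(N)]

f = [1.0, 2.0, 0.0, -1.0]
g = [0.5, 0.0, 0.5, 0.0]

direct = circular_conv(f, g)
via_fft = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))  # circular convolution theorem
assert np.allclose(direct, via_fft)
```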

Fast convolution algorithms

In many situations, discrete convolutions can be converted to circular convolutions so that fast transforms with a convolution property can be used to implement the computation. For example, convolution of digit sequences is the kernel operation in multiplication of multi-digit numbers, which can therefore be efficiently implemented with transform techniques (Knuth 1997, §4.3.3.C; von zur Gathen & Gerhard 2003, §8.2).

Eq.1 requires N arithmetic operations per output value and N² operations for N outputs. That can be significantly reduced with any of several fast algorithms. Digital signal processing and other applications typically use fast convolution algorithms to reduce the cost of the convolution to O(N log N) complexity.

The most common fast convolution algorithms use fast Fourier transform (FFT) algorithms via the circular convolution theorem. Specifically, the circular convolution of two finite-length sequences is found by taking an FFT of each sequence, multiplying pointwise, and then performing an inverse FFT. Convolutions of the type defined above are then efficiently implemented using that technique in conjunction with zero-extension and/or discarding portions of the output. Other fast convolution algorithms, such as the Schönhage–Strassen algorithm or the Mersenne transform,[14] use fast Fourier transforms in other rings. The Winograd method is used as an alternative to the FFT.[15] It significantly speeds up 1D,[16] 2D,[17] and 3D[18] convolution.

If one sequence is much longer than the other, zero-extension of the shorter sequence and fast circular convolution is not the most computationally efficient method available.[19] Instead, decomposing the longer sequence into blocks and convolving each block allows for faster algorithms such as the overlap–save method and overlap–add method.[20] A hybrid convolution method that combines block and FIR algorithms allows for a zero input-output latency that is useful for real-time convolution computations.[21]
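A sketch of the zero-extension idea behind FFT-based fast convolution: pad both sequences to at least the length of their linear convolution, multiply the transforms, and invert. For very long inputs the same padding step is applied block by block in overlap–add or overlap–save; the lengths and random test data below are arbitrary.

```python
import numpy as np

def fft_linear_conv(x, h):
    """Linear convolution via the FFT; zero-padding prevents circular wrap-around."""
    n = len(x) + len(h) - 1
    nfft = 1 << (n - 1).bit_length()       # next power of two, for FFT efficiency
    X = np.fft.rfft(x, nfft)
    H = np.fft.rfft(h, nfft)
    return np.fft.irfft(X * H, nfft)[:n]

rng = np.random.default_rng(1)
x = rng.standard_normal(1000)
h = rng.standard_normal(32)
assert np.allclose(fft_linear_conv(x, h), np.convolve(x, h))  # matches the direct method
```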

Domain of definition

The convolution of two complex-valued functions on Rd is itself a complex-valued function on Rd, defined by:

$$(f * g)(x) = \int_{\mathbb{R}^d} f(y)\, g(x - y)\, dy,$$

and is well-defined only if f and g decay sufficiently rapidly at infinity in order for the integral to exist. Conditions for the existence of the convolution may be tricky, since a blow-up in g at infinity can be easily offset by sufficiently rapid decay in f. The question of existence thus may involve different conditions on f and g:

Compactly supported functions

If f and g are compactly supported continuous functions, then their convolution exists, and is also compactly supported and continuous (Hörmander 1983, Chapter 1). More generally, if either function (say f) is compactly supported and the other is locally integrable, then the convolution f ∗ g is well-defined and continuous.

Convolution of f and g is also well defined when both functions are locally square integrable on R and supported on an interval of the form [a, +∞) (or both supported on (−∞, a]).

Integrable functions

The convolution of f and g exists if f and g are both Lebesgue integrable functions in L1(Rd), and in this case f ∗ g is also integrable (Stein & Weiss 1971, Theorem 1.3). This is a consequence of Tonelli's theorem. This is also true for functions in L1, under the discrete convolution, or more generally for the convolution on any group.

Likewise, if f ∈ L1(Rd) and g ∈ Lp(Rd) where 1 ≤ p ≤ ∞, then f ∗ g ∈ Lp(Rd), and

$$\|f * g\|_p \le \|f\|_1\, \|g\|_p.$$

In the particular case p = 1, this shows that L1 is a Banach algebra under the convolution (and equality of the two sides holds if f and g are non-negative almost everywhere).
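The discrete analogue of the p = 1 case is easy to check numerically: for summable sequences, the ℓ1 norm of the convolution is at most the product of the ℓ1 norms. The random sequences below are arbitrary; only NumPy is assumed.

```python
import numpy as np

rng = np.random.default_rng(2)
f = rng.standard_normal(50)
g = rng.standard_normal(80)

lhs = np.abs(np.convolve(f, g)).sum()     # ‖f ∗ g‖₁
rhs = np.abs(f).sum() * np.abs(g).sum()   # ‖f‖₁ · ‖g‖₁
assert lhs <= rhs + 1e-12                 # Young's inequality with p = q = r = 1
```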

More generally, Young's inequality implies that the convolution is a continuous bilinear map between suitable Lp spaces. Specifically, if 1 ≤ p, q, r ≤ ∞ satisfy:

$$\frac{1}{p} + \frac{1}{q} = \frac{1}{r} + 1,$$

then

$$\|f * g\|_r \le \|f\|_p\, \|g\|_q, \qquad f \in L^p,\ g \in L^q,$$

so that the convolution is a continuous bilinear mapping from Lp × Lq to Lr. The Young inequality for convolution is also true in other contexts (circle group, convolution on Z). The preceding inequality is not sharp on the real line: when 1 < p, q, r < ∞, there exists a constant Bp,q < 1 such that:

$$\|f * g\|_r \le B_{p,q}\, \|f\|_p\, \|g\|_q, \qquad f \in L^p,\ g \in L^q.$$

The optimal value of Bp,q was discovered in 1975[22] and independently in 1976,[23] see Brascamp–Lieb inequality.

A stronger estimate is true provided 1 < p, q, r < ∞:

$$\|f * g\|_r \le C_{p,q}\, \|f\|_p\, \|g\|_{q,w},$$

where ‖g‖_{q,w} is the weak Lq norm. Convolution also defines a bilinear continuous map L^{p,w} × L^{q,w} → L^{r,w} for 1 < p, q, r < ∞, owing to the weak Young inequality:[24]

$$\|f * g\|_{r,w} \le C_{p,q}\, \|f\|_{p,w}\, \|g\|_{q,w}.$$

Functions of rapid decay

In addition to compactly supported functions and integrable functions, functions that have sufficiently rapid decay at infinity can also be convolved. An important feature of the convolution is that if f and g both decay rapidly, then f ∗ g also decays rapidly. In particular, if f and g are rapidly decreasing functions, then so is the convolution f ∗ g. Combined with the fact that convolution commutes with differentiation (see #Properties), it follows that the class of Schwartz functions is closed under convolution (Stein & Weiss 1971, Theorem 3.3).

Distributions

If f is a smooth function that is compactly supported and g is a distribution, then f ∗ g is a smooth function defined by

$$(f * g)(x) = \langle g,\ f(x - \cdot) \rangle.$$

More generally, it is possible to extend the definition of the convolution in a unique way, with φ the same as f above, so that the associative law

$$f * (g * \varphi) = (f * g) * \varphi$$

remains valid in the case where f is a distribution, and g a compactly supported distribution (Hörmander 1983, §4.2).

Measures

The convolution of any two Borel measures μ and ν of bounded variation is the measure μ ∗ ν defined by (Rudin 1962)

$$\int_{\mathbb{R}^d} f\, d(\mu * \nu) = \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} f(x + y)\, d\mu(x)\, d\nu(y).$$

In particular,

$$(\mu * \nu)(A) = \int_{\mathbb{R}^d \times \mathbb{R}^d} 1_A(x + y)\, d(\mu \times \nu)(x, y),$$

where A ⊂ Rd is a measurable set and 1_A is the indicator function of A.

This agrees with the convolution defined above when μ and ν are regarded as distributions, as well as the convolution of L1 functions when μ and ν are absolutely continuous with respect to the Lebesgue measure.

The convolution of measures also satisfies the following version of Young's inequality

$$\|\mu * \nu\| \le \|\mu\|\, \|\nu\|,$$

where the norm is the total variation of a measure. Because the space of measures of bounded variation is a Banach space, convolution of measures can be treated with standard methods of functional analysis that may not apply for the convolution of distributions.

Properties

Algebraic properties

The convolution defines a product on the linear space of integrable functions. This product satisfies the following algebraic properties, which formally mean that the space of integrable functions with the product given by convolution is a commutative associative algebra without identity (Strichartz 1994, §3.3). Other linear spaces of functions, such as the space of continuous functions of compact support, are closed under the convolution, and so also form commutative associative algebras.

Commutativity
$$f * g = g * f$$
Proof: By definition,
$$(f * g)(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\, d\tau.$$
Changing the variable of integration to u = t − τ, the result follows.
Associativity
$$f * (g * h) = (f * g) * h$$
Proof: This follows from using Fubini's theorem (i.e., double integrals can be evaluated as iterated integrals in either order).
Distributivity
$$f * (g + h) = (f * g) + (f * h)$$
Proof: This follows from linearity of the integral.
Associativity with scalar multiplication
$$a\, (f * g) = (a f) * g$$
for any real (or complex) number a.
Multiplicative identity
No algebra of functions possesses an identity for the convolution. The lack of identity is typically not a major inconvenience, since most collections of functions on which the convolution is performed can be convolved with a delta distribution (a unitary impulse, centered at zero) or, at the very least (as is the case of L1) admit approximations to the identity. The linear space of compactly supported distributions does, however, admit an identity under the convolution. Specifically,
$$f * \delta = f,$$
where δ is the delta distribution.
Inverse element
Some distributions S have an inverse element S⁻¹ for the convolution, which then must satisfy
$$S^{-1} * S = \delta,$$
from which an explicit formula for S⁻¹ may be obtained.
The set of invertible distributions forms an abelian group under the convolution.
Complex conjugation
$$\overline{f * g} = \overline{f} * \overline{g}$$
Time reversal
If q(t) = r(t) ∗ s(t), then q(−t) = r(−t) ∗ s(−t).

Proof (using convolution theorem):

$$q(t) = r(t) * s(t) \;\Longrightarrow\; Q(f) = R(f)\, S(f) \;\Longrightarrow\; Q(-f) = R(-f)\, S(-f) \;\Longrightarrow\; q(-t) = r(-t) * s(-t).$$

Relationship with differentiation
$$(f * g)' = f' * g = f * g'$$
Proof:
$$(f * g)'(t) = \frac{d}{dt} \int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\, d\tau = \int_{-\infty}^{\infty} f(\tau)\, \frac{\partial}{\partial t} g(t - \tau)\, d\tau = (f * g')(t).$$
Relationship with integration
If F(t) = ∫_{−∞}^{t} f(τ) dτ and G(t) = ∫_{−∞}^{t} g(τ) dτ, then
$$(F * g)(t) = (f * G)(t) = \int_{-\infty}^{t} (f * g)(\tau)\, d\tau.$$

Integration

If f and g are integrable functions, then the integral of their convolution on the whole space is simply obtained as the product of their integrals:[25]

$$\int_{\mathbb{R}^d} (f * g)(x)\, dx = \left( \int_{\mathbb{R}^d} f(x)\, dx \right) \left( \int_{\mathbb{R}^d} g(x)\, dx \right).$$

This follows from Fubini's theorem. The same result holds if f and g are only assumed to be nonnegative measurable functions, by Tonelli's theorem.
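The discrete counterpart of this identity, that the sum of a full discrete convolution equals the product of the sums, can be verified directly; the random sequences below are arbitrary and NumPy is assumed.

```python
import numpy as np

rng = np.random.default_rng(3)
f = rng.standard_normal(40)
g = rng.standard_normal(25)

# Σ_n (f ∗ g)[n] = (Σ_m f[m]) · (Σ_k g[k]) — a discrete Fubini argument.
assert np.isclose(np.convolve(f, g).sum(), f.sum() * g.sum())
```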

Differentiation

In the one-variable case,

$$\frac{d}{dx}(f * g) = \frac{df}{dx} * g = f * \frac{dg}{dx},$$

where d/dx is the derivative. More generally, in the case of functions of several variables, an analogous formula holds with the partial derivative:

$$\frac{\partial}{\partial x_i}(f * g) = \frac{\partial f}{\partial x_i} * g = f * \frac{\partial g}{\partial x_i}.$$

A particular consequence of this is that the convolution can be viewed as a "smoothing" operation: the convolution of f and g is differentiable as many times as f and g are in total.

These identities hold, for example, under the condition that f and g are absolutely integrable and at least one of them has an absolutely integrable (L1) weak derivative, as a consequence of Young's convolution inequality. For instance, when f is continuously differentiable with compact support, and g is an arbitrary locally integrable function,

$$\frac{d}{dx}(f * g) = \frac{df}{dx} * g.$$

These identities also hold much more broadly in the sense of tempered distributions if one of f or g is a rapidly decreasing tempered distribution, a compactly supported tempered distribution or a Schwartz function and the other is a tempered distribution. On the other hand, two positive integrable and infinitely differentiable functions may have a nowhere continuous convolution.

In the discrete case, the difference operator D f(n) = f(n + 1) − f(n) satisfies an analogous relationship:

$$D(f * g) = (Df) * g = f * (Dg).$$
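For finitely supported sequences this identity can be checked directly, provided the boundary terms of the difference operator are kept by zero-padding; the sequences below are arbitrary and NumPy is assumed.

```python
import numpy as np

def forward_diff(a):
    """D a(n) = a(n+1) − a(n) for a finitely supported sequence.

    Padding with a zero on each side keeps the boundary terms, so the result
    represents D a on indices −1 .. len(a)−1.
    """
    padded = np.concatenate(([0.0], np.asarray(a, dtype=float), [0.0]))
    return np.diff(padded)

rng = np.random.default_rng(4)
f = rng.standard_normal(20)
g = rng.standard_normal(30)

# D(f ∗ g) = (D f) ∗ g = f ∗ (D g)
assert np.allclose(forward_diff(np.convolve(f, g)), np.convolve(forward_diff(f), g))
assert np.allclose(forward_diff(np.convolve(f, g)), np.convolve(f, forward_diff(g)))
```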

Convolution theorem

The convolution theorem states that[26]

$$\mathcal{F}\{f * g\} = \mathcal{F}\{f\} \cdot \mathcal{F}\{g\},$$

where $\mathcal{F}\{f\}$ denotes the Fourier transform of f.
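For finitely supported sequences the theorem can be verified with the DFT by zero-padding both sequences to the length of their linear convolution, so that no wrap-around occurs; the sequences are arbitrary and NumPy is assumed.

```python
import numpy as np

rng = np.random.default_rng(5)
f = rng.standard_normal(16)
g = rng.standard_normal(24)

n = len(f) + len(g) - 1                       # length of the linear convolution
lhs = np.fft.fft(np.convolve(f, g), n)        # F{f ∗ g}
rhs = np.fft.fft(f, n) * np.fft.fft(g, n)     # F{f} · F{g}
assert np.allclose(lhs, rhs)
```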

Convolution in other types of transformations

Versions of this theorem also hold for the Laplace transform, two-sided Laplace transform, Z-transform and Mellin transform.

Convolution on matrices

If W is the Fourier transform matrix, then

$$W \left(C^{(1)} x \ast C^{(2)} y\right) = \left(W C^{(1)} \bullet W C^{(2)}\right) (x \otimes y) = \left(W C^{(1)} x\right) \circ \left(W C^{(2)} y\right),$$

where $\bullet$ is the face-splitting product,[27][28][29][30][31] $\otimes$ denotes the Kronecker product, and $\circ$ denotes the Hadamard product (this result follows from count sketch properties[32]).

This can be generalized for appropriate matrices A, B:

$$W \left(A x \ast B y\right) = \left(W A \bullet W B\right) (x \otimes y) = \left(W A x\right) \circ \left(W B y\right),$$

from the properties of the face-splitting product.

Translational equivariance

The convolution commutes with translations, meaning that

$$\tau_x (f * g) = (\tau_x f) * g = f * (\tau_x g),$$

where τ_x f is the translation of the function f by x, defined by

$$(\tau_x f)(y) = f(y - x).$$

If f is a Schwartz function, then τ_x f is the convolution with a translated Dirac delta function, τ_x f = f ∗ τ_x δ. So translation invariance of the convolution of Schwartz functions is a consequence of the associativity of convolution.
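For cyclic convolution on Z/NZ the commutation with translations can be checked directly, with np.roll (a cyclic shift) standing in for τ_x; the sequences and the shift are arbitrary and NumPy is assumed.

```python
import numpy as np

def circ_conv(f, g):
    """Cyclic convolution computed via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))

rng = np.random.default_rng(6)
f = rng.standard_normal(64)
g = rng.standard_normal(64)
shift = 10

# τ_x(f ∗ g) = (τ_x f) ∗ g = f ∗ (τ_x g)
lhs = np.roll(circ_conv(f, g), shift)
assert np.allclose(lhs, circ_conv(np.roll(f, shift), g))
assert np.allclose(lhs, circ_conv(f, np.roll(g, shift)))
```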

Furthermore, under certain conditions, convolution is the most general translation invariant operation. Informally speaking, the following holds

Suppose that S is a bounded linear operator acting on functions which commutes with translations: S(τ_x f) = τ_x (Sf) for all x. Then S is given as convolution with a function (or distribution) g_S; that is, Sf = g_S ∗ f.

Thus some translation invariant operations can be represented as convolution. Convolutions play an important role in the study of time-invariant systems, and especially LTI system theory. The representing function gS is the impulse response of the transformation S.

A more precise version of the theorem quoted above requires specifying the class of functions on which the convolution is defined, and also requires assuming in addition that S must be a continuous linear operator with respect to the appropriate topology. It is known, for instance, that every continuous translation invariant continuous linear operator on L1 is the convolution with a finite Borel measure. More generally, every continuous translation invariant continuous linear operator on Lp for 1 ≤ p < ∞ is the convolution with a tempered distribution whose Fourier transform is bounded. To wit, they are all given by bounded Fourier multipliers.

Convolutions on groups

If G is a suitable group endowed with a measure λ, and if f and g are real or complex valued integrable functions on G, then we can define their convolution by

$$(f * g)(x) = \int_G f(y)\, g\left(y^{-1} x\right)\, d\lambda(y).$$

It is not commutative in general. In typical cases of interest G is a locally compact Hausdorff topological group and λ is a (left-) Haar measure. In that case, unless G is unimodular, the convolution defined in this way is not the same as $\int f\left(x y^{-1}\right) g(y)\, d\lambda(y)$. The preference of one over the other is made so that convolution with a fixed function g commutes with left translation in the group:

$$L_h (f * g) = (L_h f) * g.$$

Furthermore, the convention is also required for consistency with the definition of the convolution of measures given below. However, with a right instead of a left Haar measure, the latter integral is preferred over the former.

On locally compact abelian groups, a version of the convolution theorem holds: the Fourier transform of a convolution is the pointwise product of the Fourier transforms. The circle group T with the Lebesgue measure is an immediate example. For a fixed g in L1(T), we have the following familiar operator acting on the Hilbert space L2(T):

$$T f(x) = \frac{1}{2\pi} \int_{\mathbf{T}} f(y)\, g(x - y)\, dy.$$

The operator T is compact. A direct calculation shows that its adjoint T* is convolution with

$$\bar{g}(-y).$$

By the commutativity property cited above, T is normal: T*T = TT*. Also, T commutes with the translation operators. Consider the family S of operators consisting of all such convolutions and the translation operators. Then S is a commuting family of normal operators. According to spectral theory, there exists an orthonormal basis {h_k} that simultaneously diagonalizes S. This characterizes convolutions on the circle. Specifically, we have

$$h_k(x) = e^{ikx}, \qquad k \in \mathbb{Z},$$

which are precisely the characters of T. Each convolution is a compact multiplication operator in this basis. This can be viewed as a version of the convolution theorem discussed above.

A discrete example is a finite cyclic group of order n. Convolution operators are here represented by circulant matrices, and can be diagonalized by the discrete Fourier transform.
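A small sketch of the cyclic case (sizes and data arbitrary, NumPy assumed): the circulant matrix built from g implements cyclic convolution with g, and conjugating it by the DFT matrix yields a diagonal matrix whose entries are the DFT of g.

```python
import numpy as np

N = 8
rng = np.random.default_rng(7)
g = rng.standard_normal(N)
f = rng.standard_normal(N)

# Circulant matrix: C[n, m] = g[(n − m) mod N], so C @ f is cyclic convolution with g.
C = np.array([[g[(n - m) % N] for m in range(N)] for n in range(N)])
assert np.allclose(C @ f, np.real(np.fft.ifft(np.fft.fft(g) * np.fft.fft(f))))

# The DFT diagonalizes C: F C F⁻¹ = diag(DFT of g).
F = np.fft.fft(np.eye(N))                 # DFT matrix
D = F @ C @ np.linalg.inv(F)
assert np.allclose(D, np.diag(np.fft.fft(g)))
```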

A similar result holds for compact groups (not necessarily abelian): the matrix coefficients of finite-dimensional unitary representations form an orthonormal basis in L2 by the Peter–Weyl theorem, and an analog of the convolution theorem continues to hold, along with many other aspects of harmonic analysis that depend on the Fourier transform.

Convolution of measures

Let G be a (multiplicatively written) topological group. If μ and ν are Radon measures on G, then their convolution μ ∗ ν is defined as the pushforward measure of the group action and can be written as[33]

$$(\mu * \nu)(E) = \iint 1_E(xy)\, d\mu(x)\, d\nu(y)$$

for each measurable subset E of G. The convolution μ ∗ ν is also a Radon measure, whose total variation satisfies

$$\|\mu * \nu\| \le \|\mu\|\, \|\nu\|.$$

In the case when G is locally compact with (left-) Haar measure λ, and μ and ν are absolutely continuous with respect to λ, so that each has a density function, then the convolution μ ∗ ν is also absolutely continuous, and its density function is just the convolution of the two separate density functions. In fact, if either measure is absolutely continuous with respect to the Haar measure, then so is their convolution.[34]

If μ and ν are probability measures on the topological group (R, +), then the convolution μ ∗ ν is the probability distribution of the sum X + Y of two independent random variables X and Y whose respective distributions are μ and ν.
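For discrete random variables this says that the probability mass function of X + Y is the convolution of the two mass functions. A minimal check with two fair six-sided dice (an assumed toy example; NumPy only):

```python
import numpy as np

# PMF of one fair die on values 0..6 (value 0 has probability 0).
die = np.array([0.0] + [1.0 / 6.0] * 6)

# PMF of the sum of two independent dice = convolution of the PMFs.
pmf_sum = np.convolve(die, die)        # index k holds P(X + Y = k), k = 0..12

print(pmf_sum[7])                      # P(sum = 7) = 6/36 ≈ 0.1667
assert np.isclose(pmf_sum.sum(), 1.0)  # still a probability distribution
```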

Infimal convolution

In convex analysis, the infimal convolution of proper (not identically +∞) convex functions f₁, …, f_m on Rn is defined by:[35]

$$(f_1 \,\Box\, \cdots \,\Box\, f_m)(x) = \inf_{x_1 + \cdots + x_m = x} \left( f_1(x_1) + \cdots + f_m(x_m) \right).$$

It can be shown that the infimal convolution of convex functions is convex. Furthermore, it satisfies an identity analogous to that of the Fourier transform of a traditional convolution, with the role of the Fourier transform played instead by the Legendre transform:

$$\varphi^*(x) = \sup_y \left( x \cdot y - \varphi(y) \right).$$

We have:

$$(f_1 \,\Box\, \cdots \,\Box\, f_m)^*(x) = f_1^*(x) + \cdots + f_m^*(x).$$
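A brute-force sketch on a grid (all ranges and step sizes are assumptions): with f(x) = x²/2 and g(x) = |x|, both proper convex functions, the infimal convolution is the Huber function, x²/2 for |x| ≤ 1 and |x| − 1/2 otherwise, and discretizing the infimum over y reproduces it up to the grid resolution.

```python
import numpy as np

xs = np.linspace(-4.0, 4.0, 801)     # points where the infimal convolution is evaluated
ys = np.linspace(-8.0, 8.0, 1601)    # grid over which the infimum is approximated

f = lambda y: 0.5 * y**2
g = lambda z: np.abs(z)

# (f □ g)(x) = inf_y { f(y) + g(x − y) }, approximated by a minimum over the y-grid.
inf_conv = np.array([np.min(f(ys) + g(x - ys)) for x in xs])

huber = np.where(np.abs(xs) <= 1.0, 0.5 * xs**2, np.abs(xs) - 0.5)
assert np.max(np.abs(inf_conv - huber)) < 0.05   # agreement up to grid resolution
```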

Bialgebras

Let (X, Δ, ∇, ε, η) be a bialgebra with comultiplication Δ, multiplication ∇, unit η, and counit ε. The convolution is a product defined on the endomorphism algebra End(X) as follows. Let φ, ψ ∈ End(X), that is, φ, ψ : X → X are functions that respect all algebraic structure of X; then the convolution φ ∗ ψ is defined as the composition

$$X \xrightarrow{\ \Delta\ } X \otimes X \xrightarrow{\ \varphi \otimes \psi\ } X \otimes X \xrightarrow{\ \nabla\ } X.$$

The convolution appears notably in the definition of Hopf algebras (Kassel 1995, §III.3). A bialgebra is a Hopf algebra if and only if it has an antipode: an endomorphism S such that

$$S * \operatorname{id}_X = \operatorname{id}_X * S = \eta \circ \varepsilon.$$

Applications

Gaussian blur can be used to obtain a smooth grayscale digital image of a halftone print.

Convolution and related operations are found in many applications in science, engineering and mathematics.
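As one concrete instance, the Gaussian blur in the image caption above is a 2D convolution of the image with a Gaussian kernel. The sketch below builds the kernel by hand and applies it with SciPy's general-purpose 2D convolution routine; the image data, kernel size, and σ are placeholders.

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(size=9, sigma=2.0):
    """Normalized 2D Gaussian kernel of shape (size, size)."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

rng = np.random.default_rng(8)
image = rng.random((128, 128))        # stand-in for a grayscale image

# Blur = image convolved with the Gaussian kernel (symmetric boundary handling).
blurred = convolve2d(image, gaussian_kernel(), mode="same", boundary="symm")
```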

Notes

  1. ^ Reasons for the reflection include:
    • It is necessary to implement the equivalent of the pointwise product of the Fourier transforms of f and g.
    • When the convolution is viewed as a moving weighted average, the weighting function, g(−x), is often specified in terms of another function, g(x), called the impulse response of a linear time-invariant system.
  2. ^ The symbol U+2217 ∗ ASTERISK OPERATOR is different than U+002A * ASTERISK, which is often used to denote complex conjugation. See Asterisk § Mathematical typography.

References

  1. ^ Bahri, Mawardi; Ashino, Ryuichi; Vaillancourt, Rémi (2013). "Convolution Theorems for Quaternion Fourier Transform: Properties and Applications" (PDF). Abstract and Applied Analysis. 2013: 1–10. doi:10.1155/2013/162769. Archived (PDF) from the original on 2025-08-06. Retrieved 2025-08-06.
  2. ^ Smith, Stephen W (1997). "13.Convolution". The Scientist and Engineer's Guide to Digital Signal Processing (1 ed.). California Technical Publishing. ISBN 0-9660176-3-3. Retrieved 22 April 2016.
  3. ^ Irwin, J. David (1997). "4.3". The Industrial Electronics Handbook (1 ed.). Boca Raton, FL: CRC Press. p. 75. ISBN 0-8493-8343-9.
  4. ^ Differential Equations (Spring 2010), MIT 18.03. "Lecture 21: Convolution Formula". MIT Open Courseware. MIT. Retrieved 22 December 2021.
  5. ^ "18.03SC Differential Equations Fall 2011" (PDF). Green's Formula, Laplace Transform of Convolution. Archived (PDF) from the original on 2025-08-06.
  6. ^ Dominguez-Torres, p 2
  7. ^ Dominguez-Torres, p 4
  8. ^ R. N. Bracewell (2005), "Early work on imaging theory in radio astronomy", in W. T. Sullivan (ed.), The Early Years of Radio Astronomy: Reflections Fifty Years After Jansky's Discovery, Cambridge University Press, p. 172, ISBN 978-0-521-61602-7
  9. ^ John Hilton Grace and Alfred Young (1903), The algebra of invariants, Cambridge University Press, p. 40
  10. ^ Leonard Eugene Dickson (1914), Algebraic invariants, J. Wiley, p. 85, ISBN 978-1-4297-0042-9
  11. ^ According to [Lothar von Wolfersdorf (2000), "Einige Klassen quadratischer Integralgleichungen", Sitzungsberichte der Sächsischen Akademie der Wissenschaften zu Leipzig, Mathematisch-naturwissenschaftliche Klasse, volume 128, number 2, 6–7], the source is Volterra, Vito (1913), "Leçons sur les fonctions de lignes". Gauthier-Villars, Paris 1913.
  12. ^ Damelin & Miller 2011, p. 219
  13. ^ Press, William H.; Flannery, Brian P.; Teukolsky, Saul A.; Vetterling, William T. (1989). Numerical Recipes in Pascal. Cambridge University Press. p. 450. ISBN 0-521-37516-9.
  14. ^ Rader, C.M. (December 1972). "Discrete Convolutions via Mersenne Transforms". IEEE Transactions on Computers. 21 (12): 1269–1273. doi:10.1109/T-C.1972.223497. S2CID 1939809.
  15. ^ Winograd, Shmuel (January 1980). Arithmetic Complexity of Computations. Society for Industrial and Applied Mathematics. doi:10.1137/1.9781611970364. ISBN 978-0-89871-163-9.
  16. ^ Lyakhov, P. A.; Nagornov, N. N.; Semyonova, N. F.; Abdulsalyamova, A. S. (June 2023). "Reducing the Computational Complexity of Image Processing Using Wavelet Transform Based on the Winograd Method". Pattern Recognition and Image Analysis. 33 (2): 184–191. doi:10.1134/S1054661823020074. ISSN 1054-6618. S2CID 259310351.
  17. ^ Wu, Di; Fan, Xitian; Cao, Wei; Wang, Lingli (May 2021). "SWM: A High-Performance Sparse-Winograd Matrix Multiplication CNN Accelerator". IEEE Transactions on Very Large Scale Integration (VLSI) Systems. 29 (5): 936–949. doi:10.1109/TVLSI.2021.3060041. ISSN 1063-8210. S2CID 233433757.
  18. ^ Mittal, Sparsh; Vibhu (May 2021). "A survey of accelerator architectures for 3D convolution neural networks". Journal of Systems Architecture. 115: 102041. doi:10.1016/j.sysarc.2021.102041. S2CID 233917781.
  19. ^ Selesnick, Ivan W.; Burrus, C. Sidney (1999). "Fast Convolution and Filtering". In Madisetti, Vijay K. (ed.). Digital Signal Processing Handbook. CRC Press. p. Section 8. ISBN 978-1-4200-4563-5.
  20. ^ Juang, B.H. "Lecture 21: Block Convolution" (PDF). EECS at the Georgia Institute of Technology. Archived (PDF) from the original on 2025-08-06. Retrieved 17 May 2013.
  21. ^ Gardner, William G. (November 1994). "Efficient Convolution without Input/Output Delay" (PDF). Audio Engineering Society Convention 97. Paper 3897. Archived (PDF) from the original on 2025-08-06. Retrieved 17 May 2013.
  22. ^ Beckner, William (1975). "Inequalities in Fourier analysis". Annals of Mathematics. Second Series. 102 (1): 159–182. doi:10.2307/1970980. JSTOR 1970980.
  23. ^ Brascamp, Herm Jan; Lieb, Elliott H. (1976). "Best constants in Young's inequality, its converse, and its generalization to more than three functions". Advances in Mathematics. 20 (2): 151–173. doi:10.1016/0001-8708(76)90184-5.
  24. ^ Reed & Simon 1975, IX.4
  25. ^ Weisstein, Eric W. "Convolution". mathworld.wolfram.com. Retrieved 2025-08-06.
  26. ^ Weisstein, Eric W. "From MathWorld--A Wolfram Web Resource".
  27. ^ Slyusar, V. I. (December 27, 1996). "End products in matrices in radar applications" (PDF). Radioelectronics and Communications Systems. 41 (3): 50–53. Archived (PDF) from the original on 2025-08-06.
  28. ^ Slyusar, V. I. (2025-08-06). "Analytical model of the digital antenna array on a basis of face-splitting matrix products" (PDF). Proc. ICATT-97, Kyiv: 108–109. Archived (PDF) from the original on 2025-08-06.
  29. ^ Slyusar, V. I. (2025-08-06). "New operations of matrices product for applications of radars" (PDF). Proc. Direct and Inverse Problems of Electromagnetic and Acoustic Wave Theory (DIPED-97), Lviv.: 73–74. Archived (PDF) from the original on 2025-08-06.
  30. ^ Slyusar, V. I. (March 13, 1998). "A Family of Face Products of Matrices and its Properties" (PDF). Cybernetics and Systems Analysis C/C of Kibernetika I Sistemnyi Analiz.- 1999. 35 (3): 379–384. doi:10.1007/BF02733426. S2CID 119661450. Archived (PDF) from the original on 2025-08-06.
  31. ^ Slyusar, V. I. (2003). "Generalized face-products of matrices in models of digital antenna arrays with nonidentical channels" (PDF). Radioelectronics and Communications Systems. 46 (10): 9–17. Archived (PDF) from the original on 2025-08-06.
  32. ^ Ninh, Pham; Pagh, Rasmus (2013). Fast and scalable polynomial kernels via explicit feature maps. SIGKDD international conference on Knowledge discovery and data mining. Association for Computing Machinery. doi:10.1145/2487575.2487591.
  33. ^ Hewitt and Ross (1979) Abstract harmonic analysis, volume 1, second edition, Springer-Verlag, p 266.
  34. ^ Hewitt and Ross (1979), Theorem 19.18, p 272.
  35. ^ R. Tyrrell Rockafellar (1970), Convex analysis, Princeton University Press
  36. ^ Zhang, Yingjie; Soon, Hong Geok; Ye, Dongsen; Fuh, Jerry Ying Hsi; Zhu, Kunpeng (September 2020). "Powder-Bed Fusion Process Monitoring by Machine Vision With Hybrid Convolutional Neural Networks". IEEE Transactions on Industrial Informatics. 16 (9): 5769–5779. doi:10.1109/TII.2019.2956078. ISSN 1941-0050. S2CID 213010088.
  37. ^ Chervyakov, N.I.; Lyakhov, P.A.; Deryabin, M.A.; Nagornov, N.N.; Valueva, M.V.; Valuev, G.V. (September 2020). "Residue Number System-Based Solution for Reducing the Hardware Cost of a Convolutional Neural Network". Neurocomputing. 407: 439–453. doi:10.1016/j.neucom.2020.04.018. S2CID 219470398. Convolutional neural networks represent deep learning architectures that are currently used in a wide range of applications, including computer vision, speech recognition, time series analysis in finance, and many others.
  38. ^ Atlas, Homma, and Marks. "An Artificial Neural Network for Spatio-Temporal Bipolar Patterns: Application to Phoneme Classification" (PDF). Neural Information Processing Systems (NIPS 1987). 1. Archived (PDF) from the original on 2025-08-06.
  39. ^ Zölzer, Udo, ed. (2002). DAFX: Digital Audio Effects, p. 48–49. ISBN 0471490784.
  40. ^ Diggle 1985.
  41. ^ Ghasemi & Nowak 2017.
  42. ^ Monaghan, J. J. (1992). "Smoothed particle hydrodynamics". Annual Review of Astronomy and Astrophysics. 30: 543–547. Bibcode:1992ARA&A..30..543M. doi:10.1146/annurev.aa.30.090192.002551. Retrieved 16 February 2021.
