From Wikipedia, the free encyclopedia

In computer science, an instruction set architecture (ISA) is an abstract model that generally defines how software controls the CPU in a computer or a family of computers.[1] A device or program that executes instructions described by that ISA, such as a central processing unit (CPU), is called an implementation of that ISA.

In general, an ISA defines the supported instructions, data types, registers, the hardware support for managing main memory,[clarification needed] fundamental features (such as memory consistency, addressing modes, and virtual memory), and the input/output model of implementations of the ISA.

An ISA specifies the behavior of machine code running on implementations of that ISA in a fashion that does not depend on the characteristics of that implementation, providing binary compatibility between implementations. This enables multiple implementations of an ISA that differ in characteristics such as performance, physical size, and monetary cost (among other things), but that are capable of running the same machine code, so that a lower-performance, lower-cost machine can be replaced with a higher-cost, higher-performance machine without having to replace software. It also enables the evolution of the microarchitectures of the implementations of that ISA, so that a newer, higher-performance implementation of an ISA can run software that runs on previous generations of implementations.

If an operating system maintains a standard and compatible application binary interface (ABI) for a particular ISA, machine code will run on future implementations of that ISA and operating system. However, if an ISA supports running multiple operating systems, it does not guarantee that machine code for one operating system will run on another operating system, unless the first operating system supports running machine code built for the other operating system.

An ISA can be extended by adding instructions or other capabilities, or adding support for larger addresses and data values; an implementation of the extended ISA will still be able to execute machine code for versions of the ISA without those extensions. Machine code using those extensions will only run on implementations that support those extensions.

The binary compatibility that they provide makes ISAs one of the most fundamental abstractions in computing.

Overview


An instruction set architecture is distinguished from a microarchitecture, which is the set of processor design techniques used, in a particular processor, to implement the instruction set. Processors with different microarchitectures can share a common instruction set. For example, the Intel Pentium and the AMD Athlon implement nearly identical versions of the x86 instruction set, but they have radically different internal designs.

The concept of an architecture, distinct from the design of a specific machine, was developed by Fred Brooks at IBM during the design phase of System/360.

Prior to NPL [System/360], the company's computer designers had been free to honor cost objectives not only by selecting technologies but also by fashioning functional and architectural refinements. The SPREAD compatibility objective, in contrast, postulated a single architecture for a series of five processors spanning a wide range of cost and performance. None of the five engineering design teams could count on being able to bring about adjustments in architectural specifications as a way of easing difficulties in achieving cost and performance objectives.[2]: p. 137

Some virtual machines that support bytecode as their ISA, such as Smalltalk, the Java virtual machine, and Microsoft's Common Language Runtime, execute the bytecode for commonly used code paths by translating it into native machine code, and execute less frequently used code paths by interpretation (see: Just-in-time compilation). Transmeta implemented the x86 instruction set atop very long instruction word (VLIW) processors in this fashion.

Classification of ISAs


An ISA may be classified in a number of different ways. A common classification is by architectural complexity. A complex instruction set computer (CISC) has many specialized instructions, some of which may only be rarely used in practical programs. A reduced instruction set computer (RISC) simplifies the processor by efficiently implementing only the instructions that are frequently used in programs, while the less common operations are implemented as subroutines, whose additional execution time is offset by their infrequent use.[3]

Other types include VLIW architectures, and the closely related long instruction word (LIW)[citation needed] and explicitly parallel instruction computing (EPIC) architectures. These architectures seek to exploit instruction-level parallelism with less hardware than RISC and CISC by making the compiler responsible for instruction issue and scheduling.[4]

Architectures with even less complexity have been studied, such as the minimal instruction set computer (MISC) and one-instruction set computer (OISC). These are theoretically important types, but have not been commercialized.[5][6]

Instructions


Machine language is built up from discrete statements or instructions. Depending on the processor architecture, a given instruction may specify:

  • opcode (the instruction to be performed), e.g., add, copy, test
  • any explicit operands:
    • registers
    • literal/constant values
    • addressing modes used to access memory

More complex operations are built up by combining these simple instructions, which are executed sequentially, or as otherwise directed by control flow instructions.

Instruction types


Examples of operations common to many instruction sets include:

Data handling and memory operations

  • Set a register to a fixed constant value.
  • Copy data from a memory location or a register to a memory location or a register (such a machine instruction is often called move; however, the term is misleading, since the data is copied rather than moved). These instructions are used to store the contents of a register, the contents of another memory location, or the result of a computation, or to retrieve stored data to perform a computation on it later. They are often called load or store operations.
  • Read data from, or write data to, hardware devices.

Arithmetic and logic operations

  • Add, subtract, multiply, or divide the values of two registers, placing the result in a register, possibly setting one or more condition codes in a status register.[7]
    • Some ISAs have increment and decrement instructions, saving an operand fetch in trivial cases.
  • Perform bitwise operations, e.g., taking the conjunction and disjunction of corresponding bits in a pair of registers, taking the negation of each bit in a register.
  • Compare two values in registers (for example, to see if one is less, or if they are equal).
  • Floating-point instructions for arithmetic on floating-point numbers.[7]

Control flow operations

  • Branch to another location in the program and execute instructions there.
  • Conditionally branch to another location if a certain condition holds.
  • Indirectly branch to another location.
  • Skip one or more instructions, depending on conditions.
  • Trap: explicitly cause an interrupt, either conditionally or unconditionally.
  • Call another block of code, while saving, e.g., the location of the next instruction, as a point to return to.

Coprocessor instructions

  • Load/store data to and from a coprocessor, or exchange data with CPU registers.
  • Perform coprocessor operations.

Complex instructions


Processors may include "complex" instructions in their instruction set. A single "complex" instruction does something that may take many instructions on other computers. Such instructions are typified by instructions that take multiple steps, control multiple functional units, or otherwise appear on a larger scale than the bulk of simple instructions implemented by the given processor. Examples include instructions that move large blocks of memory, perform an ALU operation with an operand taken from memory, or save and restore multiple registers at once.

Complex instructions are more common in CISC instruction sets than in RISC instruction sets, but RISC instruction sets may include them as well. RISC instruction sets generally do not include ALU operations with memory operands, or instructions to move large blocks of memory, but most RISC instruction sets include SIMD or vector instructions that perform the same arithmetic operation on multiple pieces of data at the same time. SIMD instructions can manipulate large vectors and matrices in minimal time, and they allow easy parallelization of algorithms commonly involved in sound, image, and video processing. Various SIMD implementations have been brought to market under trade names such as MMX, 3DNow!, and AltiVec.
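
As a conceptual picture of the idea above (one operation applied to many data elements at once), the following Python sketch models a 4-lane vector add; the 4-element lane width and the helper name simd_add are invented for illustration and do not correspond to any real SIMD instruction.

    # Conceptual model of a SIMD add: one "instruction" applies the same
    # operation to every lane of a fixed-width vector (4 lanes here).
    def simd_add(lanes_a, lanes_b):
        return [a + b for a, b in zip(lanes_a, lanes_b)]   # all lanes in one step

    a = [1, 2, 3, 4, 5, 6, 7, 8]
    b = [10, 20, 30, 40, 50, 60, 70, 80]
    result = []
    for i in range(0, len(a), 4):                          # one "vector add" per four
        result.extend(simd_add(a[i:i+4], b[i:i+4]))        # elements, not one add each
    print(result)                                          # [11, 22, 33, 44, 55, 66, 77, 88]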

Instruction encoding

One instruction may have several fields, which identify the logical operation, and may also include source and destination addresses and constant values. The MIPS "Add Immediate" instruction, for example, has fields that select the source and destination registers and include a small constant.
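
As a rough sketch of how such fields can be packed into an instruction word, the Python snippet below assembles a MIPS I-type word, assuming the standard MIPS32 I-type layout (6-bit opcode, 5-bit rs, 5-bit rt, 16-bit immediate) and the conventional opcode 0b001000 for addi; the particular registers and constant are chosen only for the example.

    # Sketch: packing the fields of a MIPS I-type instruction into a 32-bit
    # word, assuming the standard layout opcode | rs | rt | immediate.
    def encode_i_type(opcode: int, rs: int, rt: int, imm: int) -> int:
        assert 0 <= opcode < 64 and 0 <= rs < 32 and 0 <= rt < 32
        return (opcode << 26) | (rs << 21) | (rt << 16) | (imm & 0xFFFF)

    # addi $t0, $t1, 5 : opcode 0b001000, rs = $t1 (register 9), rt = $t0 (register 8)
    word = encode_i_type(0b001000, 9, 8, 5)
    print(f"{word:032b}")   # 00100001001010000000000000000101
    print(hex(word))        # 0x21280005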

On traditional architectures, an instruction includes an opcode that specifies the operation to perform, such as add contents of memory to register—and zero or more operand specifiers, which may specify registers, memory locations, or literal data. The operand specifiers may have addressing modes determining their meaning or may be in fixed fields. In very long instruction word (VLIW) architectures, which include many microcode architectures, multiple simultaneous opcodes and operands are specified in a single instruction.

Some exotic instruction sets, such as transport triggered architectures (TTA), do not have an opcode field, only operand(s).

Most stack machines have "0-operand" instruction sets in which arithmetic and logical operations lack any operand specifier fields; only instructions that push operands onto the evaluation stack or that pop operands from the stack into variables have operand specifiers. The instruction set carries out most ALU actions with postfix (reverse Polish notation) operations that work only on the expression stack, not on data registers or arbitrary main memory cells. This can be very convenient for compiling high-level languages, because most arithmetic expressions can be easily translated into postfix notation.[8]
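
To make the 0-operand model concrete, the following sketch interprets a toy stack machine (invented for illustration, not any real ISA): only push and pop name a memory cell, while the arithmetic instructions take their operands implicitly from the expression stack, exactly as in the postfix translation described above.

    # Toy 0-operand stack machine (illustrative only, not a real ISA).
    def run(program, memory):
        stack = []
        for op, *args in program:
            if op == "push":            # push the value of a memory cell
                stack.append(memory[args[0]])
            elif op == "pop":           # pop the top of stack into a memory cell
                memory[args[0]] = stack.pop()
            elif op == "add":           # 0-operand: both operands are implicit
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "mul":
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
        return memory

    # D = (A + B) * C in postfix: A B add C mul
    memory = {"A": 2, "B": 3, "C": 4, "D": 0}
    program = [("push", "A"), ("push", "B"), ("add",),
               ("push", "C"), ("mul",), ("pop", "D")]
    print(run(program, memory)["D"])    # 20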

Conditional instructions often have a predicate field—a few bits that encode the specific condition to cause an operation to be performed rather than not performed. For example, a conditional branch instruction will transfer control if the condition is true, so that execution proceeds to a different part of the program, and not transfer control if the condition is false, so that execution continues sequentially. Some instruction sets also have conditional moves, so that the move will be executed, and the data stored in the target location, if the condition is true, and not executed, and the target location not modified, if the condition is false. Similarly, IBM z/Architecture has a conditional store instruction. A few instruction sets include a predicate field in every instruction. Having predicates for non-branch instructions is called predication.
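
The semantics of a conditional move can be pictured as an update that happens only when the predicate is true, with no transfer of control. The sketch below is purely an illustrative model of that behavior, not a real instruction encoding; the register names are invented.

    # Illustrative semantics of a conditional move: the destination register
    # is modified only if the predicate is true; no branch is taken either way.
    def cmov(regs, pred, dest, src):
        if pred:
            regs[dest] = regs[src]

    regs = {"r1": 10, "r2": 99}
    cmov(regs, pred=False, dest="r1", src="r2")
    print(regs["r1"])   # 10 (predicate false: target left unmodified)
    cmov(regs, pred=True, dest="r1", src="r2")
    print(regs["r1"])   # 99 (predicate true: move performed)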

Number of operands

[edit]

Instruction sets may be categorized by the maximum number of operands explicitly specified in instructions.

(In the examples that follow, a, b, and c are (direct or calculated) addresses referring to memory cells, while reg1 and so on refer to machine registers.)

Each class is illustrated below with code that evaluates the statement C = A+B; a short simulation sketch follows the list.
  • 0-operand (zero-address machines), so-called stack machines: All arithmetic operations take place using the top one or two positions on the stack:[9] push a, push b, add, pop c.
    • C = A+B needs four instructions.[10] For stack machines, the terms "0-operand" and "zero-address" apply to arithmetic instructions, but not to all instructions, as 1-operand push and pop instructions are used to access memory.
  • 1-operand (one-address machines), so-called accumulator machines, include early computers and many small microcontrollers: most instructions specify a single right operand (that is, a constant, a register, or a memory location), with the implicit accumulator as the left operand (and the destination if there is one): load a, add b, store c.
    • C = A+B needs three instructions.[10]
  • 2-operand — many CISC and RISC machines fall under this category:
    • CISC — move A to C; then add B to C.
      • C = A+B needs two instructions. This effectively 'stores' the result without an explicit store instruction.
    • CISC — Often machines are limited to one memory operand per instruction: load a,reg1; add b,reg1; store reg1,c. This requires a load/store pair for any memory operand, regardless of whether the result of the add is stored to a different place, as in C = A+B, or back to the same memory location, as in A = A+B.
      • C = A+B needs three instructions.
    • RISC — Requiring explicit memory loads, the instructions would be: load a,reg1; load b,reg2; add reg1,reg2; store reg2,c.
      • C = A+B needs four instructions.
  • 3-operand, allowing better reuse of data:[11]
    • CISC — It becomes either a single instruction: add a,b,c
      • C = A+B needs one instruction.
    • CISC — Or, on machines limited to two memory operands per instruction, move a,reg1; add reg1,b,c;
      • C = A+B needs two instructions.
    • RISC — arithmetic instructions use registers only, so explicit 2-operand load/store instructions are needed: load a,reg1; load b,reg2; add reg1+reg2->reg3; store reg3,c;
      • C = A+B needs four instructions.
      • Unlike 2-operand or 1-operand, this leaves all three values a, b, and c in registers available for further reuse.[11]
  • more operands—some CISC machines permit a variety of addressing modes that allow more than 3 operands (registers or memory accesses), such as the VAX "POLY" polynomial evaluation instruction.
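
The simulation sketch below models the 1-operand (accumulator) and 3-operand load/store styles from the list as plain Python functions over a dictionary-based memory and register file; the instruction and register names are invented for the example and do not belong to any particular ISA.

    # Illustrative comparison of 1-operand (accumulator) and 3-operand
    # (load/store) code for C = A+B; names are invented for the example.
    mem = {"a": 2, "b": 3, "c": 0}

    # Accumulator machine: the accumulator is an implicit operand of every instruction.
    acc = 0
    def load(addr):                 # acc <- mem[addr]
        global acc; acc = mem[addr]
    def add(addr):                  # acc <- acc + mem[addr]
        global acc; acc += mem[addr]
    def store(addr):                # mem[addr] <- acc
        mem[addr] = acc

    load("a"); add("b"); store("c")                 # three instructions
    print(mem["c"])                                 # 5

    # 3-operand load/store machine: every operand is named explicitly.
    regs = {}
    def load_r(addr, rd):    regs[rd] = mem[addr]
    def add_r(rs1, rs2, rd): regs[rd] = regs[rs1] + regs[rs2]
    def store_r(rs, addr):   mem[addr] = regs[rs]

    load_r("a", "reg1"); load_r("b", "reg2")        # four instructions, but a, b,
    add_r("reg1", "reg2", "reg3")                   # and the sum all stay in
    store_r("reg3", "c")                            # registers for later reuse
    print(mem["c"])                                 # 5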

Due to the large number of bits needed to encode the three registers of a 3-operand instruction, RISC architectures that have 16-bit instructions are invariably 2-operand designs, such as the Atmel AVR, TI MSP430, and some versions of ARM Thumb. RISC architectures that have 32-bit instructions are usually 3-operand designs, such as the ARM, AVR32, MIPS, Power ISA, and SPARC architectures.

Each instruction specifies some number of operands (registers, memory locations, or immediate values) explicitly. Some instructions give one or both operands implicitly, such as by being stored on top of the stack or in an implicit register. If some of the operands are given implicitly, fewer operands need be specified in the instruction. When a "destination operand" explicitly specifies the destination, an additional operand must be supplied. Consequently, the number of operands encoded in an instruction may differ from the mathematically necessary number of arguments for a logical or arithmetic operation (the arity). Operands are either encoded in the "opcode" representation of the instruction, or else are given as values or addresses following the opcode.

Register pressure


Register pressure measures the availability of free registers at any point in time during the program execution. Register pressure is high when a large number of the available registers are in use; thus, the higher the register pressure, the more often the register contents must be spilled into memory. Increasing the number of registers in an architecture decreases register pressure but increases the cost.[12]
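
One simplified way to quantify the idea is to compare the peak number of simultaneously live values against the number of architectural registers: whatever exceeds the register count must be spilled. The sketch below implements only that simplified model (real register allocators are considerably more sophisticated), and the live ranges and register count are invented for illustration.

    # Simplified register-pressure model: peak number of simultaneously live
    # values versus available registers (real allocators are more sophisticated).
    def peak_pressure(live_ranges):
        """live_ranges: (start, end) program points at which each value is live."""
        events = []
        for start, end in live_ranges:
            events.append((start, +1))   # value becomes live
            events.append((end, -1))     # value dies
        live = peak = 0
        for _, delta in sorted(events):
            live += delta
            peak = max(peak, live)
        return peak

    ranges = [(0, 5), (1, 6), (2, 3), (2, 7), (4, 8)]   # invented live ranges
    num_registers = 3                                   # invented register count
    pressure = peak_pressure(ranges)
    print(pressure)                                     # 4
    print(max(0, pressure - num_registers))             # 1 value must be spilled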

While embedded instruction sets such as Thumb suffer from extremely high register pressure because they have small register sets, general-purpose RISC ISAs like MIPS and Alpha enjoy low register pressure. CISC ISAs like x86-64 offer low register pressure despite having smaller register sets. This is due to the many addressing modes and optimizations (such as sub-register addressing, memory operands in ALU instructions, absolute addressing, PC-relative addressing, and register-to-register spills) that CISC ISAs offer.[13]

Instruction length


The size or length of an instruction varies widely, from as little as four bits in some microcontrollers to many hundreds of bits in some VLIW systems. Processors used in personal computers, mainframes, and supercomputers have minimum instruction sizes between 8 and 64 bits. The longest possible instruction on x86 is 15 bytes (120 bits).[14] Within an instruction set, different instructions may have different lengths. In some architectures, notably most reduced instruction set computers (RISC), instructions are a fixed length, typically corresponding with that architecture's word size. In other architectures, instructions have variable length, typically integral multiples of a byte or a halfword. Some, such as ARM with Thumb extension, have mixed variable encoding, that is, two fixed encodings (usually 32-bit and 16-bit), where instructions cannot be mixed freely but must be switched between on a branch (or an exception boundary in ARMv8).

Fixed-length instructions are less complicated to handle than variable-length instructions for several reasons (not having to check whether an instruction straddles a cache line or virtual memory page boundary,[11] for instance), and are therefore somewhat easier to optimize for speed.
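
As an illustration of the extra work variable-length decoding implies, the sketch below walks a byte stream for a made-up encoding in which the opcode byte determines how many operand bytes follow; the decoder cannot know where the next instruction starts until it has examined the current one, whereas a fixed-length ISA could slice the stream at constant offsets. All opcodes and lengths here are invented.

    # Made-up variable-length encoding: the opcode byte determines how many
    # operand bytes follow (all opcodes and lengths invented for illustration).
    OPERAND_BYTES = {0x01: 0,    # NOP             - no operands
                     0x02: 1,    # INC reg         - one register byte
                     0x03: 2,    # ADD reg, reg    - two register bytes
                     0x04: 3}    # LOAD reg, imm16 - register byte + 2 immediate bytes

    def decode(stream):
        pc = 0
        while pc < len(stream):
            opcode = stream[pc]
            length = 1 + OPERAND_BYTES[opcode]            # must be computed before the
            operands = list(stream[pc + 1: pc + length])  # next instruction can start
            yield pc, opcode, operands
            pc += length                                  # variable step

    code = bytes([0x02, 0x07,              # INC  r7
                  0x04, 0x01, 0x34, 0x12,  # LOAD r1, 0x1234
                  0x03, 0x01, 0x07])       # ADD  r1, r7
    for pc, opcode, operands in decode(code):
        print(pc, hex(opcode), operands)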

Code density


In early 1960s computers, main memory was expensive and very limited, even on mainframes. Minimizing the size of a program to make sure it would fit in the limited memory was often central. Thus the size of the instructions needed to perform a particular task, the code density, was an important characteristic of any instruction set. It remained important on the initially-tiny memories of minicomputers and then microprocessors. Density remains important today, for smartphone applications, applications downloaded into browsers over slow Internet connections, and in ROMs for embedded applications. A more general advantage of increased density is improved effectiveness of caches and instruction prefetch.

Computers with high code density often have complex instructions for procedure entry, parameterized returns, loops, etc. (therefore retroactively named Complex Instruction Set Computers, CISC). However, more typical, or frequent, "CISC" instructions merely combine a basic ALU operation, such as "add", with the access of one or more operands in memory (using addressing modes such as direct, indirect, indexed, etc.). Certain architectures may allow two or three operands (including the result) directly in memory or may be able to perform functions such as automatic pointer increment, etc. Software-implemented instruction sets may have even more complex and powerful instructions.

Reduced instruction-set computers, RISC, were first widely implemented during a period of rapidly growing memory subsystems. They sacrifice code density to simplify implementation circuitry, and try to increase performance via higher clock frequencies and more registers. A single RISC instruction typically performs only a single operation, such as an "add" of registers or a "load" from a memory location into a register. A RISC instruction set normally has a fixed instruction length, whereas a typical CISC instruction set has instructions of widely varying length. However, as RISC computers normally require more and often longer instructions to implement a given task, they inherently make less optimal use of bus bandwidth and cache memories.

Certain embedded RISC ISAs like Thumb and AVR32 typically exhibit very high density owing to a technique called code compression. This technique packs two 16-bit instructions into one 32-bit word, which is then unpacked at the decode stage and executed as two instructions.[15]
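
A toy sketch of the packing scheme described above: two 16-bit instruction halves stored in one 32-bit word and split apart again at the decode stage. The choice of putting the first instruction in the low half, and the example half-word values, are arbitrary and for illustration only.

    # Toy illustration: pack two 16-bit instruction halves into one 32-bit word
    # and unpack them at decode time (ordering chosen arbitrarily here).
    def pack(first: int, second: int) -> int:
        assert 0 <= first < 1 << 16 and 0 <= second < 1 << 16
        return (second << 16) | first              # low half holds the first instruction

    def unpack(word: int) -> tuple:
        return word & 0xFFFF, (word >> 16) & 0xFFFF

    word = pack(0xB500, 0x4770)                    # two example 16-bit values
    print(hex(word))                               # 0x4770b500
    print([hex(half) for half in unpack(word)])    # ['0xb500', '0x4770']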

Minimal instruction set computers (MISC) are commonly a form of stack machine, where there are few separate instructions (8–32), so that multiple instructions can be fit into a single machine word. These types of cores often take little silicon to implement, so they can be easily realized in an FPGA (field-programmable gate array) or in a multi-core form. The code density of MISC is similar to the code density of RISC; the increased instruction density is offset by requiring more of the primitive instructions to do a task.[16][failed verification]

There has been research into executable compression as a mechanism for improving code density. The mathematics of Kolmogorov complexity describes the challenges and limits of this.

In practice, code density is also dependent on the compiler. Most optimizing compilers have options that control whether to optimize code generation for execution speed or for code density. For instance GCC has the option -Os to optimize for small machine code size, and -O3 to optimize for execution speed at the cost of larger machine code.

Representation


The instructions constituting a program are rarely specified using their internal, numeric form (machine code); they may be specified by programmers using an assembly language or, more commonly, may be generated from high-level programming languages by compilers.[17]

Design


The design of instruction sets is a complex issue. There have been two broad stages in the history of the microprocessor. The first was the CISC (complex instruction set computer), which had many different instructions. In the 1970s, however, research at organizations such as IBM found that many instructions in the set could be eliminated. The result was the RISC (reduced instruction set computer), an architecture that uses a smaller set of instructions. A simpler instruction set may offer the potential for higher speeds, reduced processor size, and reduced power consumption. However, a more complex set may optimize common operations, improve memory and cache efficiency, or simplify programming.

Some instruction set designers reserve one or more opcodes for some kind of system call or software interrupt. For example, the MOS Technology 6502 uses 00H, the Zilog Z80 uses the eight codes C7H, CFH, D7H, DFH, E7H, EFH, F7H, FFH,[18] while the Motorola 68000 uses codes in the range 4E40H-4E4FH.[19]

Fast virtual machines are much easier to implement if an instruction set meets the Popek and Goldberg virtualization requirements.[clarification needed]

The NOP slide used in immunity-aware programming is much easier to implement if the "unprogrammed" state of the memory is interpreted as a NOP.[dubious – discuss]

On systems with multiple processors, non-blocking synchronization algorithms are much easier to implement[citation needed] if the instruction set includes support for something such as "fetch-and-add", "load-link/store-conditional" (LL/SC), or "atomic compare-and-swap".

Instruction set implementation


A given instruction set can be implemented in a variety of ways. All ways of implementing a particular instruction set provide the same programming model, and all implementations of that instruction set are able to run the same executables. The various ways of implementing an instruction set give different tradeoffs between cost, performance, power consumption, size, etc.

When designing the microarchitecture of a processor, engineers use blocks of "hard-wired" electronic circuitry (often designed separately) such as adders, multiplexers, counters, registers, ALUs, etc. Some kind of register transfer language is then often used to describe the decoding and sequencing of each instruction of an ISA using this physical microarchitecture. There are two basic ways to build a control unit to implement this description (although many designs use middle ways or compromises):

  1. Some computer designs "hardwire" the complete instruction set decoding and sequencing (just like the rest of the microarchitecture).
  2. Other designs employ microcode routines or tables (or both) to do this, using ROMs or writable RAMs (writable control store), PLAs, or both.

Some microcoded CPU designs with a writable control store use it to allow the instruction set to be changed (for example, the Rekursiv processor and the Imsys Cjip).[20]

CPUs designed for reconfigurable computing may use field-programmable gate arrays (FPGAs).

An ISA can also be emulated in software by an interpreter. Naturally, due to the interpretation overhead, this is slower than directly running programs on the emulated hardware, unless the hardware running the emulator is an order of magnitude faster. Today, it is common practice for vendors of new ISAs or microarchitectures to make software emulators available to software developers before the hardware implementation is ready.
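
As a sketch of what software emulation by interpretation looks like, the loop below interprets a made-up accumulator ISA one instruction at a time: the fetch, decode, and dispatch work that hardware would perform directly is done here in software, which is where the interpretation overhead comes from. The instruction set, the program, and the memory-cell names are all invented for illustration.

    # Minimal interpreter for a made-up accumulator ISA (illustrative only).
    # Each instruction is (mnemonic, operand); fetch/decode/execute is done in software.
    def interpret(program, memory):
        acc, pc = 0, 0
        while pc < len(program):
            op, arg = program[pc]                    # fetch
            pc += 1
            if op == "LOAD":     acc = memory[arg]   # decode and execute
            elif op == "ADD":    acc += memory[arg]
            elif op == "SUB":    acc -= memory[arg]
            elif op == "STORE":  memory[arg] = acc
            elif op == "JNZ":    pc = arg if acc != 0 else pc
            elif op == "HALT":   break
        return memory

    # total = n + (n-1) + ... + 1, computed with a counting loop.
    memory = {"n": 4, "one": 1, "total": 0}
    program = [
        ("LOAD", "total"), ("ADD", "n"), ("STORE", "total"),  # total += n
        ("LOAD", "n"), ("SUB", "one"), ("STORE", "n"),        # n -= 1
        ("JNZ", 0),                                           # loop while n != 0
        ("HALT", None),
    ]
    print(interpret(program, memory)["total"])                # 10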

Often the details of the implementation have a strong influence on the particular instructions selected for the instruction set. For example, many implementations of the instruction pipeline only allow a single memory load or memory store per instruction, leading to a load–store architecture (RISC). For another example, some early ways of implementing the instruction pipeline led to a delay slot.

The demands of high-speed digital signal processing have pushed in the opposite direction—forcing instructions to be implemented in a particular way. For example, to perform digital filters fast enough, the MAC instruction in a typical digital signal processor (DSP) must use a kind of Harvard architecture that can fetch an instruction and two data words simultaneously, and it requires a single-cycle multiply–accumulate multiplier.
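
The role of the multiply-accumulate operation is easiest to see in the inner loop of a finite impulse response (FIR) filter, where each tap contributes one multiply and one add to a running sum. The Python sketch below only shows that data flow; on a DSP each iteration of the inner loop would be a single MAC instruction, with the coefficient and the sample fetched simultaneously as described above. The coefficients and input are invented.

    # FIR filter: each tap of the inner loop is one multiply-accumulate (MAC).
    # Python is used only to show the data flow, not for performance.
    def fir(samples, coeffs):
        out = []
        history = [0.0] * len(coeffs)
        for x in samples:
            history = [x] + history[:-1]     # shift in the newest sample
            acc = 0.0
            for h, s in zip(coeffs, history):
                acc += h * s                 # the multiply-accumulate step
            out.append(acc)
        return out

    print(fir([1, 2, 3, 4], [0.5, 0.25, 0.25]))   # [0.5, 1.25, 2.25, 3.25]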


References

  1. ^ "GLOSSARY: Instruction Set Architecture (ISA)". arm.com. Archived from the original on 2025-08-07. Retrieved 2025-08-07.
  2. ^ Pugh, Emerson W.; Johnson, Lyle R.; Palmer, John H. (1991). IBM's 360 and Early 370 Systems. MIT Press. ISBN 0-262-16123-0.
  3. ^ Chen, Crystal; Novick, Greg; Shimano, Kirk (December 16, 2006). "RISC Architecture: RISC vs. CISC". cs.stanford.edu. Archived from the original on February 21, 2015. Retrieved February 21, 2015.
  4. ^ Schlansker, Michael S.; Rau, B. Ramakrishna (February 2000). "EPIC: Explicitly Parallel Instruction Computing". Computer. 33 (2): 37–45. doi:10.1109/2.820037.
  5. ^ Shaout, Adnan; Eldos, Taisir (Summer 2003). "On the Classification of Computer Architecture". International Journal of Science and Technology. 14: 3. Retrieved March 2, 2023.
  6. ^ Gilreath, William F.; Laplante, Phillip A. (December 6, 2012). Computer Architecture: A Minimalist Perspective. Springer Science+Business Media. ISBN 978-1-4615-0237-1.
  7. ^ a b Hennessy & Patterson 2003, p. 108.
  8. ^ Durand, Paul. "Instruction Set Architecture (ISA)". Introduction to Computer Science CS 0.
  9. ^ Hennessy & Patterson 2003, p. 92.
  10. ^ a b Hennessy & Patterson 2003, p. 93.
  11. ^ a b c Cocke, John; Markstein, Victoria (January 1990). "The evolution of RISC technology at IBM" (PDF). IBM Journal of Research and Development. 34 (1): 4–11. doi:10.1147/rd.341.0004. Retrieved 2025-08-07.
  12. ^ Page, Daniel (2009). "11. Compilers". A Practical Introduction to Computer Architecture. Springer. p. 464. Bibcode:2009pica.book.....P. ISBN 978-1-84882-255-9.
  13. ^ Venkat, Ashish; Tullsen, Dean M. (2014). Harnessing ISA Diversity: Design of a Heterogeneous-ISA Chip Multiprocessor. 41st Annual International Symposium on Computer Architecture.
  14. ^ "Intel? 64 and IA-32 Architectures Software Developer's Manual". Intel Corporation. Retrieved 5 October 2022.
  15. ^ Weaver, Vincent M.; McKee, Sally A. (2009). Code density concerns for new architectures. IEEE International Conference on Computer Design. CiteSeerX 10.1.1.398.1967. doi:10.1109/ICCD.2009.5413117.
  16. ^ "RISC vs. CISC". cs.stanford.edu. Retrieved 2025-08-07.
  17. ^ Hennessy & Patterson 2003, p. 120.
  18. ^ Ganssle, Jack (February 26, 2001). "Proactive Debugging". embedded.com.
  19. ^ M68000 8-/16-/32-Bit Microprocessors User's Manual (9th ed.). Motorola. 1993. p. 4-188 (TRAP).
  20. ^ "Great Microprocessors of the Past and Present (V 13.4.0)". cpushack.net. Retrieved 2025-08-07.
