Technicians working on a large Linux cluster at the Chemnitz University of Technology, Germany
Sun Microsystems Solaris Cluster, with In-Row cooling
Taiwania series uses cluster architecture.

A computer cluster is a set of computers that work together so that they can be viewed as a single system. Unlike grid computers, computer clusters have each node set to perform the same task, controlled and scheduled by software. The newest manifestation of cluster computing is cloud computing.

The components of a cluster are usually connected to each other through fast local area networks, with each node (computer used as a server) running its own instance of an operating system. In most circumstances, all of the nodes use the same hardware[1][better source needed] and the same operating system, although in some setups (e.g. using Open Source Cluster Application Resources (OSCAR)), different operating systems can be used on each computer, or different hardware.[2]

Clusters are usually deployed to improve performance and availability over that of a single computer, while typically being much more cost-effective than single computers of comparable speed or availability.[3]

Computer clusters emerged as a result of the convergence of a number of computing trends including the availability of low-cost microprocessors, high-speed networks, and software for high-performance distributed computing.[citation needed] They have a wide range of applicability and deployment, ranging from small business clusters with a handful of nodes to some of the fastest supercomputers in the world such as IBM's Sequoia.[4] Prior to the advent of clusters, single-unit fault tolerant mainframes with modular redundancy were employed; but the lower upfront cost of clusters, and increased speed of network fabric has favoured the adoption of clusters. In contrast to high-reliability mainframes, clusters are cheaper to scale out, but also have increased complexity in error handling, as in clusters error modes are not opaque to running programs.[5]

Basic concepts

A simple, home-built Beowulf cluster

The desire to get more computing power and better reliability by orchestrating a number of low-cost commercial off-the-shelf computers has given rise to a variety of architectures and configurations.

The computer clustering approach usually (but not always) connects a number of readily available computing nodes (e.g. personal computers used as servers) via a fast local area network.[6] The activities of the computing nodes are orchestrated by "clustering middleware", a software layer that sits atop the nodes and allows the users to treat the cluster as by and large one cohesive computing unit, e.g. via a single system image concept.[6]

Computer clustering relies on a centralized management approach which makes the nodes available as orchestrated shared servers. It is distinct from other approaches such as peer-to-peer or grid computing which also use many nodes, but with a far more distributed nature.[6]

A computer cluster may be a simple two-node system which just connects two personal computers, or may be a very fast supercomputer. A basic approach to building a cluster is that of a Beowulf cluster which may be built with a few personal computers to produce a cost-effective alternative to traditional high-performance computing. An early project that showed the viability of the concept was the 133-node Stone Soupercomputer.[7] The developers used Linux, the Parallel Virtual Machine toolkit and the Message Passing Interface library to achieve high performance at a relatively low cost.[8]

Although a cluster may consist of just a few personal computers connected by a simple network, the cluster architecture may also be used to achieve very high levels of performance. The TOP500 organization's semiannual list of the 500 fastest supercomputers often includes many clusters, e.g. the world's fastest machine in 2011 was the K computer, which has a distributed-memory cluster architecture.[9]

History

A VAX 11/780, c. 1977, as used in early VAXcluster development

Greg Pfister has stated that clusters were not invented by any specific vendor but by customers who could not fit all their work on one computer, or needed a backup.[10] Pfister estimates the date as some time in the 1960s. The formal engineering basis of cluster computing as a means of doing parallel work of any sort was arguably invented by Gene Amdahl of IBM, who in 1967 published what has come to be regarded as the seminal paper on parallel processing: Amdahl's Law.

The history of early computer clusters is more or less directly tied to the history of early networks, as one of the primary motivations for the development of a network was to link computing resources, creating a de facto computer cluster.

The first production system designed as a cluster was the Burroughs B5700 in the mid-1960s. This allowed up to four computers, each with either one or two processors, to be tightly coupled to a common disk storage subsystem in order to distribute the workload. Unlike standard multiprocessor systems, each computer could be restarted without disrupting overall operation.

Tandem NonStop II circa 1980

The first commercial loosely coupled clustering product was Datapoint Corporation's "Attached Resource Computer" (ARC) system, developed in 1977, and using ARCnet as the cluster interface. Clustering per se did not really take off until Digital Equipment Corporation released their VAXcluster product in 1984 for the VMS operating system. The ARC and VAXcluster products not only supported parallel computing, but also shared file systems and peripheral devices. The idea was to provide the advantages of parallel processing, while maintaining data reliability and uniqueness. Two other noteworthy early commercial clusters were the Tandem NonStop (a 1976 high-availability commercial product)[11][12] and the IBM S/390 Parallel Sysplex (circa 1994, primarily for business use).

Within the same time frame, while computer clusters used parallelism outside the computer on a commodity network, supercomputers began to exploit parallelism within a single machine. Following the success of the CDC 6600 in 1964, the Cray 1 was delivered in 1976, and introduced internal parallelism via vector processing.[13] While early supercomputers excluded clusters and relied on shared memory, in time some of the fastest supercomputers (e.g. the K computer) came to rely on cluster architectures.

Attributes of clusters

A load balancing cluster with two servers and N user stations

Computer clusters may be configured for different purposes, ranging from general-purpose business needs such as web-service support to computation-intensive scientific calculations. The attributes described below are not mutually exclusive; a cluster built for one purpose, for example, may also use a high-availability approach.

"Load-balancing" clusters are configurations in which cluster-nodes share computational workload to provide better overall performance. For example, a web server cluster may assign different queries to different nodes, so the overall response time will be optimized.[14] However, approaches to load-balancing may significantly differ among applications, e.g. a high-performance cluster used for scientific computations would balance load with different algorithms from a web-server cluster which may just use a simple round-robin method by assigning each new request to a different node.[14]

Computer clusters are used for computation-intensive purposes, rather than handling IO-oriented operations such as web service or databases.[15] For instance, a computer cluster might support computational simulations of vehicle crashes or weather. Very tightly coupled computer clusters are designed for work that may approach "supercomputing".

"High-availability clusters" (also known as failover clusters, or HA clusters) improve the availability of the cluster approach. They operate by having redundant nodes, which are then used to provide service when system components fail. HA cluster implementations attempt to use redundancy of cluster components to eliminate single points of failure. There are commercial implementations of High-Availability clusters for many operating systems. The Linux-HA project is one commonly used free software HA package for the Linux operating system.

Benefits


Clusters are primarily designed with performance in mind, but installations are also motivated by many other factors. Fault tolerance (the ability of a system to continue operating despite a malfunctioning node) supports scalability and, in high-performance settings, allows for a low frequency of maintenance routines, resource consolidation (e.g., RAID), and centralized management. Other advantages include data recovery in the event of a disaster and the provision of parallel data processing and high processing capacity.[16][17]

Clusters provide scalability by allowing nodes to be added horizontally: more computers may be added to the cluster to improve its performance, redundancy and fault tolerance. This can be an inexpensive alternative to scaling up a single node, and it allows a larger computational load to be spread across a larger number of lower-performing computers.

Adding or servicing a node does not require taking the entire cluster down: a single node can be taken offline for maintenance while the rest of the cluster absorbs its load, which improves availability.

A large number of clustered computers also lends itself to the use of distributed file systems and RAID, both of which can increase the reliability and speed of a cluster.

Design and configuration

A typical Beowulf configuration

One of the issues in designing a cluster is how tightly coupled the individual nodes may be. For instance, a single computer job may require frequent communication among nodes: this implies that the cluster shares a dedicated network, is densely located, and probably has homogeneous nodes. The other extreme is where a computer job uses one or few nodes, and needs little or no inter-node communication, approaching grid computing.

In a Beowulf cluster, the application programs never see the computational nodes (also called slave computers) but only interact with the "Master" which is a specific computer handling the scheduling and management of the slaves.[15] In a typical implementation the Master has two network interfaces, one that communicates with the private Beowulf network for the slaves, the other for the general purpose network of the organization.[15] The slave computers typically have their own version of the same operating system, and local memory and disk space. However, the private slave network may also have a large and shared file server that stores global persistent data, accessed by the slaves as needed.[15]

A special purpose 144-node DEGIMA cluster is tuned to running astrophysical N-body simulations using the Multiple-Walk parallel tree code, rather than general purpose scientific computations.[18]

Due to the increasing computing power of each generation of game consoles, a novel use has emerged in which they are repurposed into high-performance computing (HPC) clusters. Examples of game console clusters include Sony PlayStation clusters and Microsoft Xbox clusters. Another consumer-oriented example is the Nvidia Tesla Personal Supercomputer workstation, which uses multiple graphics accelerator processor chips. Besides game consoles, high-end graphics cards can also be used. Using graphics cards (or rather their GPUs) for grid-computing calculations is vastly more economical than using CPUs, despite being less precise; however, when using double-precision values, GPUs are precise enough to work with and remain much less costly in purchase price.[2]

Computer clusters have historically run on separate physical computers with the same operating system. With the advent of virtualization, the cluster nodes may run on separate physical computers with different operating systems, with a virtualization layer presenting them as similar nodes.[19][citation needed] The cluster may also be virtualized on various configurations as maintenance takes place; an example implementation is Xen as the virtualization manager with Linux-HA.[19]

Data sharing and communication


Data sharing

A NEC Nehalem cluster

As computer clusters were appearing during the 1980s, so were supercomputers. One of the elements that distinguished clusters from the early supercomputers at that time was that the supercomputers relied on shared memory. Clusters do not typically use physically shared memory, and many supercomputer architectures have since abandoned it as well.

However, the use of a clustered file system is essential in modern computer clusters.[citation needed] Examples include the IBM General Parallel File System, Microsoft's Cluster Shared Volumes or the Oracle Cluster File System.

Message passing and communication


Two widely used approaches for communication between cluster nodes are MPI (Message Passing Interface) and PVM (Parallel Virtual Machine).[20]

PVM was developed at the Oak Ridge National Laboratory around 1989, before MPI was available. PVM must be directly installed on every cluster node and provides a set of software libraries that present the node as part of a single "parallel virtual machine". PVM provides a run-time environment for message passing, task and resource management, and fault notification. It can be used by user programs written in languages such as C, C++, or Fortran.[20][21]

MPI emerged in the early 1990s out of discussions among 40 organizations. The initial effort was supported by ARPA and the National Science Foundation. Rather than starting anew, the design of MPI drew on various features available in commercial systems of the time. The MPI specifications then gave rise to specific implementations, which typically use TCP/IP and socket connections.[20] MPI is now a widely available communications model that enables parallel programs to be written in languages such as C, Fortran, and Python.[21] Thus, unlike PVM, which provides a concrete implementation, MPI is a specification that has been implemented in systems such as MPICH and Open MPI.[21][22]
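As a concrete example of the message-passing model, the short program below uses mpi4py, one of several Python bindings to MPI, to have every process report its rank and host to the root process. It is a minimal sketch: the file name and launch command are illustrative, and launcher options vary by MPI implementation.

```python
# Minimal MPI example using the mpi4py bindings; run with an MPI launcher,
# e.g. "mpirun -n 4 python hello_mpi.py" (options vary by implementation).
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()      # this process's ID within the communicator
size = comm.Get_size()      # total number of processes in the job

# Each process sends its rank and host name to rank 0.
hostname = MPI.Get_processor_name()
gathered = comm.gather((rank, hostname), root=0)

if rank == 0:
    for r, host in gathered:
        print(f"rank {r} of {size} running on {host}")
```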

Cluster management

Low-cost and low energy tiny-cluster of Cubieboards, using Apache Hadoop on Lubuntu
A pre-release sample of the Ground Electronics/AB Open Circumference C25 cluster computer system, fitted with 8x Raspberry Pi 3 Model B+ and 1x UDOO x86 boards

One of the challenges in the use of a computer cluster is the cost of administering it, which can at times be as high as the cost of administering N independent machines if the cluster has N nodes.[23] In some cases this provides an advantage to shared memory architectures with lower administration costs.[23] This has also made virtual machines popular, due to the ease of administration.[23]

Task scheduling


When a large multi-user cluster needs to access very large amounts of data, task scheduling becomes a challenge. In a heterogeneous CPU-GPU cluster with a complex application environment, the performance of each job depends on the characteristics of the underlying cluster, so mapping tasks onto CPU cores and GPU devices poses significant challenges.[24] This is an area of ongoing research; algorithms that combine and extend MapReduce and Hadoop have been proposed and studied.[24]
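To illustrate the MapReduce style of task decomposition referred to above (not the Hadoop implementation itself), the sketch below splits a word-count job into map tasks that could in principle be placed on different nodes, then reduces the partial results. How a scheduler would assign each task to a CPU core or GPU-equipped node is deliberately left abstract.

```python
from collections import Counter
from functools import reduce

def map_task(chunk):
    """Map phase: count words in one chunk of the input (could run on any node)."""
    return Counter(chunk.split())

def reduce_task(a, b):
    """Reduce phase: merge two partial counts."""
    return a + b

chunks = ["the quick brown fox", "the lazy dog", "the fox jumps"]

# In a real cluster a scheduler would place each map task on a worker node;
# here the tasks simply run locally, one after another.
partial_counts = [map_task(c) for c in chunks]
total = reduce(reduce_task, partial_counts, Counter())
print(total.most_common(3))
```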

Node failure management


When a node in a cluster fails, strategies such as "fencing" may be employed to keep the rest of the system operational.[25][26] Fencing is the process of isolating a node or protecting shared resources when a node appears to be malfunctioning. There are two classes of fencing methods; one disables a node itself, and the other disallows access to resources such as shared disks.[25]

The STONITH method stands for "Shoot The Other Node In The Head", meaning that the suspected node is disabled or powered off. For instance, power fencing uses a power controller to turn off an inoperable node.[25]

The resource fencing approach disallows access to resources without powering off the node. This may include persistent reservation fencing via SCSI-3, Fibre Channel fencing to disable the Fibre Channel port, or global network block device (GNBD) fencing to disable access to the GNBD server.
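The two fencing classes can be contrasted in a simplified sketch: a STONITH-style method powers off the suspect node through a power controller, while resource fencing leaves the node running but revokes its access to shared storage. The controller and storage interfaces below are hypothetical placeholders, not the API of any real fencing agent.

```python
def stonith_fence(node, power_controller):
    """Node fencing: power off the suspect node so it cannot touch shared state."""
    power_controller.power_off(node)      # hypothetical power-controller call

def resource_fence(node, shared_disk):
    """Resource fencing: revoke the node's access to storage, e.g. via a SCSI-3
    persistent reservation or by disabling a Fibre Channel port."""
    shared_disk.revoke_access(node)       # hypothetical storage-controller call

def handle_suspected_failure(node, power_controller, shared_disk, prefer_stonith=True):
    # A cluster manager would first confirm the node is unresponsive
    # (heartbeat loss, quorum) before fencing; that logic is omitted here.
    if prefer_stonith:
        stonith_fence(node, power_controller)
    else:
        resource_fence(node, shared_disk)
```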

Software development and administration


Parallel programming


Load-balancing clusters such as web server farms use cluster architectures to support a large number of users; typically each user request is routed to a specific node, achieving task parallelism without multi-node cooperation, since the main goal of the system is to provide users with rapid access to shared data. "Computer clusters" which perform complex computations for a small number of users, by contrast, need to take advantage of the parallel processing capabilities of the cluster and partition "the same computation" among several nodes.[27]

Automatic parallelization of programs remains a technical challenge, but parallel programming models can be used to effectuate a higher degree of parallelism via the simultaneous execution of separate portions of a program on different processors.[27][28]
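As a small illustration of partitioning "the same computation" across workers, the following sketch splits a numerical integration into chunks and evaluates them in parallel processes on one machine; on a cluster the same decomposition would typically be expressed with MPI ranks rather than local processes. The function and chunk sizes are illustrative only.

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """Integrate f(x) = x*x over [lo, hi) with a simple Riemann sum."""
    lo, hi, steps = bounds
    dx = (hi - lo) / steps
    return sum((lo + i * dx) ** 2 * dx for i in range(steps))

if __name__ == "__main__":
    # Partition the interval [0, 1) into four chunks, one per worker.
    chunks = [(i / 4, (i + 1) / 4, 250_000) for i in range(4)]
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(f"approximate integral of x^2 over [0,1): {total:.6f}")  # ~0.333333
```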

Debugging and monitoring


Developing and debugging parallel programs on a cluster requires parallel language primitives and suitable tools such as those discussed by the High Performance Debugging Forum (HPDF) which resulted in the HPD specifications.[21][29] Tools such as TotalView were then developed to debug parallel implementations on computer clusters which use Message Passing Interface (MPI) or Parallel Virtual Machine (PVM) for message passing.

The University of California, Berkeley Network of Workstations (NOW) system gathers cluster data and stores them in a database, while a system such as PARMON, developed in India, allows visually observing and managing large clusters.[21]

Application checkpointing can be used to restore a given state of the system when a node fails during a long multi-node computation.[30] This is essential in large clusters, given that as the number of nodes increases, so does the likelihood of node failure under heavy computational loads. Checkpointing can restore the system to a stable state so that processing can resume without needing to recompute results.[30]
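A minimal form of application checkpointing is to periodically serialize the computation's state so that a restarted job resumes from the last saved point instead of recomputing everything. The sketch below assumes a simple iterative computation and a local checkpoint file; on a real cluster the checkpoint would normally live on shared storage, and the file name is a hypothetical placeholder.

```python
import os
import pickle

CHECKPOINT = "state.ckpt"   # hypothetical checkpoint file (shared storage on a real cluster)

def load_state():
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"iteration": 0, "accumulator": 0.0}

def save_state(state):
    # Write to a temporary file and rename, so a crash never leaves a torn checkpoint.
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CHECKPOINT)

state = load_state()
for i in range(state["iteration"], 1_000):
    state["accumulator"] += i * 0.001     # stand-in for one step of real work
    state["iteration"] = i + 1
    if state["iteration"] % 100 == 0:
        save_state(state)                 # checkpoint every 100 iterations
print(state["accumulator"])
```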

Implementations


The Linux world supports various cluster software. For application clustering, there are distcc and MPICH. Linux Virtual Server and Linux-HA are director-based clusters that allow incoming requests for services to be distributed across multiple cluster nodes. MOSIX, LinuxPMI, Kerrighed and OpenSSI are full-blown clusters integrated into the kernel that provide automatic process migration among homogeneous nodes. OpenSSI, openMosix and Kerrighed are single-system image implementations.

Microsoft Windows Compute Cluster Server 2003, based on the Windows Server platform, provides components for high-performance computing such as the job scheduler, the MS-MPI library and management tools.

gLite is a set of middleware technologies created by the Enabling Grids for E-sciencE (EGEE) project.

Slurm is also used to schedule and manage some of the largest supercomputer clusters (see the TOP500 list).

Other approaches


Although most computer clusters are permanent fixtures, attempts at flash mob computing have been made to build short-lived clusters for specific computations. However, larger-scale volunteer computing systems such as BOINC-based systems have had more followers.

See also


Basic concepts

Distributed computing

Specific systems

Computer farms

References

  1. ^ "Cluster vs grid computing". Stack Overflow.
  2. ^ a b Graham-Smith, Darien (29 June 2012). "Weekend Project: Build your own supercomputer". PC & Tech Authority. Retrieved 2 June 2017.
  3. ^ Bader, David; Pennington, Robert (May 2001). "Cluster Computing: Applications". Georgia Tech College of Computing. Archived from the original on 2025-08-06. Retrieved 2025-08-06.
  4. ^ "Nuclear weapons supercomputer reclaims world speed record for US". The Telegraph. 18 Jun 2012. Archived from the original on 2025-08-06. Retrieved 18 Jun 2012.
  5. ^ Gray, Jim; Rueter, Andreas (1993). Transaction processing : concepts and techniques. Morgan Kaufmann Publishers. ISBN 978-1558601901.
  6. ^ a b c Enokido, Tomoya; Barolli, Leonhard; Takizawa, Makoto (23 August 2007). Network-Based Information Systems: First International Conference, NBIS 2007. p. 375. ISBN 978-3-540-74572-3.
  7. ^ William W. Hargrove, Forrest M. Hoffman and Thomas Sterling (August 16, 2001). "The Do-It-Yourself Supercomputer". Scientific American. Vol. 265, no. 2. pp. 72–79. Retrieved October 18, 2011.
  8. ^ Hargrove, William W.; Hoffman, Forrest M. (1999). "Cluster Computing: Linux Taken to the Extreme". Linux Magazine. Archived from the original on October 18, 2011. Retrieved October 18, 2011.
  9. ^ Yokokawa, Mitsuo; et al. (1–3 August 2011). The K computer: Japanese next-generation supercomputer development project. International Symposium on Low Power Electronics and Design (ISLPED). pp. 371–372. doi:10.1109/ISLPED.2011.5993668.
  10. ^ Pfister, Gregory (1998). In Search of Clusters (2nd ed.). Upper Saddle River, NJ: Prentice Hall PTR. p. 36. ISBN 978-0-13-899709-0.
  11. ^ Katzman, James A. (1982). "Chapter 29, The Tandem 16: A Fault-Tolerant Computing System". In Siewiorek, Donald P. (ed.). Computer Structure: Principles and Examples. U.S.A.: McGraw-Hill Book Company. pp. 470–485.
  12. ^ "History of TANDEM COMPUTERS, INC. – FundingUniverse". www.fundinguniverse.com. Retrieved 2025-08-06.
  13. ^ Hill, Mark Donald; Jouppi, Norman Paul; Sohi, Gurindar (1999). Readings in computer architecture. Gulf Professional. pp. 41–48. ISBN 978-1-55860-539-8.
  14. ^ a b Sloan, Joseph D. (2004). High Performance Linux Clusters. "O'Reilly Media, Inc.". ISBN 978-0-596-00570-2.
  15. ^ a b c d Daydé, Michel; Dongarra, Jack (2005). High Performance Computing for Computational Science – VECPAR 2004. Springer. pp. 120–121. ISBN 978-3-540-25424-9.
  16. ^ "IBM Cluster System : Benefits". IBM. Archived from the original on 29 April 2016. Retrieved 8 September 2014.
  17. ^ "Evaluating the Benefits of Clustering". Microsoft. 28 March 2003. Archived from the original on 22 April 2016. Retrieved 8 September 2014.
  18. ^ Hamada, Tsuyoshi; et al. (2009). "A novel multiple-walk parallel algorithm for the Barnes–Hut treecode on GPUs – towards cost effective, high performance N-body simulation". Computer Science – Research and Development. 24 (1–2): 21–31. doi:10.1007/s00450-009-0089-1. S2CID 31071570.
  19. ^ a b Mauer, Ryan (12 Jan 2006). "Xen Virtualization and Linux Clustering, Part 1". Linux Journal. Retrieved 2 Jun 2017.
  20. ^ a b c Milicchio, Franco; Gehrke, Wolfgang Alexander (2007). Distributed services with OpenAFS: for enterprise and education. Springer. pp. 339–341. ISBN 9783540366348.
  21. ^ a b c d e Prabhu, C.S.R. (2008). Grid and Cluster Computing. PHI Learning Pvt. pp. 109–112. ISBN 978-8120334281.
  22. ^ Gropp, William; Lusk, Ewing; Skjellum, Anthony (1996). "A High-Performance, Portable Implementation of the MPI Message Passing Interface". Parallel Computing. 22 (6): 789–828. CiteSeerX 10.1.1.102.9485. doi:10.1016/0167-8191(96)00024-5.
  23. ^ a b c Patterson, David A.; Hennessy, John L. (2011). Computer Organization and Design. Elsevier. pp. 641–642. ISBN 978-0-12-374750-1.
  24. ^ a b K. Shirahata; et al. (30 Nov – 3 Dec 2010). Hybrid Map Task Scheduling for GPU-Based Heterogeneous Clusters. Cloud Computing Technology and Science (CloudCom). pp. 733–740. doi:10.1109/CloudCom.2010.55. ISBN 978-1-4244-9405-7.
  25. ^ a b c Robertson, Alan. "Resource fencing using STONITH" (PDF). IBM Linux Research Center, 2010. Archived from the original (PDF) on 2025-08-06.
  26. ^ Vargas, Enrique; Bianco, Joseph; Deeths, David (2001). Sun Cluster environment: Sun Cluster 2.2. Prentice Hall Professional. p. 58. ISBN 9780130418708.
  27. ^ a b Aho, Alfred V.; Blum, Edward K. (2011). Computer Science: The Hardware, Software and Heart of It. Springer. pp. 156–166. ISBN 978-1-4614-1167-3.
  28. ^ Rauber, Thomas; Rünger, Gudula (2010). Parallel Programming: For Multicore and Cluster Systems. Springer. pp. 94–95. ISBN 978-3-642-04817-3.
  29. ^ Francioni, Joan M.; Pancake, Cherri M. (April 2000). "A Debugging Standard for High-performance computing". Scientific Programming. 8 (2). Amsterdam, Netherlands: IOS Press: 95–108. doi:10.1155/2000/971291. ISSN 1058-9244.
  30. ^ a b Sloot, Peter, ed. (2003). Computational Science: ICCS 2003: International Conference. pp. 291–292. ISBN 3-540-40195-4.
