
From Wikipedia, the free encyclopedia

In computer storage, fragmentation is a phenomenon in which storage space, such as computer memory or a hard drive, is used inefficiently because data is broken into many pieces that are not stored close together, reducing capacity or performance and often both. The exact consequences of fragmentation depend on the specific system of storage allocation in use and the particular form of fragmentation. In many cases, fragmentation leads to storage space being "wasted", and programs tend to run inefficiently due to the shortage of usable memory.

Basic principle


In main memory fragmentation, when a computer program requests blocks of memory from the computer system, the blocks are allocated in chunks. When the program is finished with a chunk, it can free it back to the system, making it available to be allocated again later, to the same or another program. The size of the chunks a program holds, and the length of time it holds them, vary. During its lifespan, a program can request and free many chunks of memory.

Fragmentation can occur when a block of memory is requested by a program and allocated to it, but the program has not yet freed it.[1] Memory that is theoretically "available" but unused remains marked as allocated, which reduces the amount of globally available memory and makes it harder for programs to request and obtain memory.

When a program is started, the free memory areas are long and contiguous. Over time and with use, the long contiguous regions become fragmented into smaller and smaller contiguous areas. Eventually, it may become impossible for the program to obtain large contiguous chunks of memory.
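This progression can be sketched with a toy model (illustrative only; the cell-array representation and helper names are hypothetical, not any real allocator's structures). Memory is an array of cells; allocating and then freeing alternating blocks leaves plenty of free space in total, but no large contiguous region:

```python
# Toy model of main-memory fragmentation (illustrative only).
# Memory is an array of cells; each allocation marks a contiguous run.

def largest_free_run(memory):
    """Length of the longest contiguous run of free (None) cells."""
    best = run = 0
    for cell in memory:
        run = run + 1 if cell is None else 0
        best = max(best, run)
    return best

MEM_SIZE = 16
memory = [None] * MEM_SIZE

# Allocate eight 2-cell blocks back to back: A, B, C, ..., H.
for i, name in enumerate("ABCDEFGH"):
    memory[2 * i] = memory[2 * i + 1] = name

# Free every other block: half the memory is free again...
for name in "ACEG":
    memory = [None if c == name else c for c in memory]

free_cells = memory.count(None)
print(free_cells)                # 8 cells free in total
print(largest_free_run(memory))  # ...but no hole is larger than 2 cells
```

Even though half the cells are free, no single request larger than two cells can be satisfied without compaction.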

Types


There are three different but related forms of fragmentation: external fragmentation, internal fragmentation, and data fragmentation, which can be present in isolation or in combination. Fragmentation is often accepted in return for improvements in speed or simplicity. Analogous phenomena occur for other resources such as processors; see below.

Internal fragmentation


Memory paging creates internal fragmentation because an entire page frame is allocated whether or not that much storage is needed.[2] Due to the rules governing memory allocation, more computer memory is sometimes allocated than is needed. For example, memory may only be provided to programs in chunks (usually a multiple of 4 bytes), so a program that requests 29 bytes will actually get a chunk of 32 bytes, and the excess memory goes to waste. In this scenario, the unusable memory, known as slack space, is contained within an allocated region. A scheme of fixed partitions suffers from the same inefficiency: any process, no matter how small, occupies an entire partition. This waste is called internal fragmentation.[3][4]
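The 29-byte example can be made concrete; this minimal sketch assumes the 4-byte granularity used in the text (real allocators use larger, implementation-specific granularities):

```python
# Internal fragmentation from rounding requests up to an allocation
# granularity (here 4 bytes, as in the example above).

GRANULARITY = 4

def allocated_size(request):
    """Round a request up to the next multiple of the granularity."""
    return -(-request // GRANULARITY) * GRANULARITY  # ceiling division

request = 29
chunk = allocated_size(request)
slack = chunk - request
print(chunk, slack)  # 32 bytes allocated, 3 bytes of slack space
```

The 3 unusable bytes are slack space inside the allocated region: internal fragmentation that cannot be reclaimed without changing the allocation granularity.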

Unlike other types of fragmentation, internal fragmentation is difficult to reclaim; usually the best way to remove it is with a design change. For example, in dynamic memory allocation, memory pools drastically cut internal fragmentation by spreading the space overhead over a larger number of objects.

External fragmentation


External fragmentation arises when free memory is separated into small blocks and is interspersed by allocated memory. It is a weakness of certain storage allocation algorithms, when they fail to order memory used by programs efficiently. The result is that, although free storage is available, it is effectively unusable because it is divided into pieces that are too small individually to satisfy the demands of the application. The term "external" refers to the fact that the unusable storage is outside the allocated regions.

For example, consider a situation wherein a program allocates three contiguous blocks of memory and then frees the middle block. The memory allocator can use this free block for future allocations. However, it cannot use the block if the allocation request is larger than the free block.

External fragmentation also occurs in file systems as many files of different sizes are created, change size, and are deleted. The effect is even worse if a file that is divided into many small pieces is deleted, because this leaves similarly small regions of free space.

Time | 0x0000 | 0x1000 | 0x2000 | 0x3000 | 0x4000 | 0x5000 | Comments
0    |        |        |        |        |        |        | Start with all memory available for storage.
1    | A      | B      | C      |        |        |        | Allocated three blocks A, B, and C, each of size 0x1000.
2    | A      |        | C      |        |        |        | Freed block B. The memory that B used cannot be included in a block larger than B's size.
3    | A      | C      |        |        |        |        | Block C moved into block B's empty slot, leaving a contiguous free region of size 0x4000.
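The sequence in the table can be modeled as a short sketch (the slot array and the `largest_free_bytes` helper are illustrative, not any real allocator's interface):

```python
# Sketch of the scenario above: a 0x6000-byte region managed in
# 0x1000-byte slots, with compaction to merge the free space.

UNIT = 0x1000
region = [None] * 6                                # time 0: all free

region[0], region[1], region[2] = "A", "B", "C"    # time 1: allocate A, B, C
region[1] = None                                   # time 2: free B

def largest_free_bytes(slots):
    """Size in bytes of the largest contiguous free region."""
    best = run = 0
    for s in slots:
        run = run + 1 if s is None else 0
        best = max(best, run)
    return best * UNIT

print(hex(largest_free_bytes(region)))  # 0x3000: B's hole is separate

# Time 3: compact by sliding allocated slots to the front.
live = [s for s in region if s is not None]
region = live + [None] * (len(region) - len(live))
print(hex(largest_free_bytes(region)))  # 0x4000: one contiguous region
```

Before compaction, B's 0x1000-byte hole cannot be combined with the 0x3000 bytes of free space after C; moving C merges them into a single 0x4000-byte region.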

Data fragmentation


Data fragmentation occurs when a collection of data in memory is broken up into many pieces that are not close together. It is typically the result of attempting to insert a large object into storage that has already suffered external fragmentation. For example, files in a file system are usually managed in units called blocks or clusters. When a file system is created, there is free space to store file blocks together contiguously. This allows for rapid sequential file reads and writes. However, as files are added, removed, and changed in size, the free space becomes externally fragmented, leaving only small holes in which to place new data. When a new file is written, or when an existing file is extended, the operating system puts the new data in new non-contiguous data blocks to fit into the available holes. The new data blocks are necessarily scattered, slowing access due to seek time and rotational latency of the read/write head, and incurring additional overhead to manage additional locations. This is called file system fragmentation.

When writing a new file of a known size, if there are any empty holes that are larger than that file, the operating system can avoid data fragmentation by putting the file into any one of those holes. There are a variety of algorithms for selecting into which of those potential holes to put the file; each is a heuristic approximate solution to the bin packing problem. The "best fit" algorithm chooses the smallest hole that is big enough. The "worst fit" algorithm chooses the largest hole. The "first fit" algorithm chooses the first hole that is big enough. The "next fit" algorithm keeps track of where each file was written. The "next fit" algorithm is faster than "first fit," which is in turn faster than "best fit," which is the same speed as "worst fit".[5]
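These four heuristics can be sketched as follows (the hole list and function names are illustrative; real file systems track free extents in more elaborate structures):

```python
# First-fit, best-fit, worst-fit, and next-fit hole selection.
# `holes` is a list of free-extent sizes; each function returns the
# index of the chosen hole, or None if no hole is large enough.

def first_fit(holes, size):
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, size):
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(candidates)[1] if candidates else None

def make_next_fit():
    """Next fit resumes scanning where the previous search stopped."""
    last = 0
    def next_fit(holes, size):
        nonlocal last
        n = len(holes)
        for step in range(n):
            i = (last + step) % n
            if holes[i] >= size:
                last = (i + 1) % n
                return i
        return None
    return next_fit

holes = [5, 12, 8, 20, 6]
print(first_fit(holes, 8))   # 1: first hole >= 8
print(best_fit(holes, 8))    # 2: smallest hole that still fits
print(worst_fit(holes, 8))   # 3: largest hole
nf = make_next_fit()
print(nf(holes, 8), nf(holes, 8))  # 1 2: the scan resumes after each hit
```

Next fit is fastest because it avoids rescanning the front of the list; best fit and worst fit are slowest because both must examine every hole.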

Just as compaction can eliminate external fragmentation, data fragmentation can be eliminated by rearranging data storage so that related pieces are close together. For example, the primary job of a defragmentation tool is to rearrange blocks on disk so that the blocks of each file are contiguous. Most defragmenting utilities also attempt to reduce or eliminate free space fragmentation. Some moving garbage collectors, utilities that perform automatic memory management, will also move related objects close together (this is called compacting) to improve cache performance.

There are four kinds of systems that never experience data fragmentation—they always store every file contiguously. All four kinds have significant disadvantages compared to systems that allow at least some temporary data fragmentation:

  1. Simply write each file contiguously. If there isn't already enough contiguous free space to hold the file, the system immediately fails to store the file—even when there are many little bits of free space from deleted files that add up to more than enough to store the file.
  2. If there isn't already enough contiguous free space to hold the file, use a copying collector to convert many little bits of free space into one contiguous free region big enough to hold the file. This takes a lot more time than breaking the file up into fragments and putting those fragments into the available free space.
  3. Write the file into any free block, using fixed-size block storage. If the programmer picks a block size that is too small, the system immediately fails to store some files (those larger than the block size), even when there are many free blocks that add up to more than enough space. If the block size is too big, a lot of space is wasted on internal fragmentation.
  4. Some systems avoid dynamic allocation entirely, pre-allocating (contiguous) space for all possible files they will need. For example, MultiFinder pre-allocated a chunk of RAM to each application as it started, according to how much RAM that application's programmer claimed it would need.

Comparison


Compared to external fragmentation, overhead and internal fragmentation account for little loss in terms of wasted memory and reduced performance. External fragmentation is commonly quantified as:

fragmentation = (1 - largest free block / total free memory) × 100%

Fragmentation of 0% means that all the free memory is in a single large block; fragmentation is 90% (for example) when 100 MB of free memory is present but the largest free block is just 10 MB.
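Assuming free memory is tracked as a list of free-block sizes, this metric can be computed directly (a minimal sketch; the function name is illustrative):

```python
# External-fragmentation metric: the share of free memory that is
# NOT in the single largest free block.

def fragmentation_percent(free_blocks):
    """free_blocks: sizes of the free regions, e.g. in MB."""
    total = sum(free_blocks)
    if total == 0:
        return 0.0          # no free memory: nothing to fragment
    return (1 - max(free_blocks) / total) * 100

print(fragmentation_percent([100]))      # 0.0: one large block
print(fragmentation_percent([10] * 10))  # 90.0: 100 MB free, largest block 10 MB
```

The second call reproduces the 90% example above: 100 MB free in ten 10 MB pieces.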

External fragmentation tends to be less of a problem in file systems than in primary memory (RAM) storage systems, because programs usually require their RAM storage requests to be fulfilled with contiguous blocks, but file systems typically are designed to be able to use any collection of available blocks (fragments) to assemble a file which logically appears contiguous. Therefore, if a highly fragmented file or many small files are deleted from a full volume and then a new file with size equal to the newly freed space is created, the new file will simply reuse the same fragments that were freed by the deletion. If what was deleted was one file, the new file will be just as fragmented as that old file was, but in any case there will be no barrier to using all the (highly fragmented) free space to create the new file. In RAM, on the other hand, the storage systems used often cannot assemble a large block to meet a request from small noncontiguous free blocks, and so the request cannot be fulfilled and the program cannot proceed to do whatever it needed that memory for (unless it can reissue the request as a number of smaller separate requests).

Problems


Storage failure


The most severe problem caused by fragmentation is making a process or system fail due to premature resource exhaustion: if a contiguous block must be stored and cannot be stored, failure occurs. Fragmentation causes this to occur even if there is enough of the resource in total, just not in a contiguous amount. For example, if a computer has 4 GiB of memory and 2 GiB are free, but the memory is fragmented in an alternating sequence of 1 MiB used, 1 MiB free, then a request for 1 GiB of contiguous memory cannot be satisfied even though 2 GiB in total are free.
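The numbers in this example can be checked with a short sketch, modeling memory at 1 MiB granularity (the list-of-granules representation is illustrative only):

```python
# The example above, checked numerically: 4 GiB of memory in an
# alternating layout of 1 MiB used, 1 MiB free (granularity: 1 MiB).

TOTAL_MIB = 4 * 1024                          # 4 GiB expressed in MiB
layout = ["used", "free"] * (TOTAL_MIB // 2)  # alternating 1 MiB granules

free_mib = layout.count("free")

def largest_free_run(granules):
    """Longest contiguous run of free 1 MiB granules."""
    best = run = 0
    for g in granules:
        run = run + 1 if g == "free" else 0
        best = max(best, run)
    return best

print(free_mib)                  # 2048: 2 GiB free in total
print(largest_free_run(layout))  # 1: no contiguous run exceeds 1 MiB
```

Half the memory is free, yet the largest satisfiable contiguous request is 1 MiB, so a 1 GiB request must fail without compaction.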

In order to avoid this, the allocator may, instead of failing, trigger a defragmentation (or memory compaction cycle) or other resource reclamation, such as a major garbage collection cycle, in the hope that it will then be able to satisfy the request. This allows the process to proceed, but can severely impact performance.

Performance degradation


Fragmentation causes performance degradation for a number of reasons. Most basically, fragmentation increases the work required to allocate and access a resource. For example, on a hard drive or tape drive, sequential data reads are very fast, but seeking to a different address is slow, so reading or writing a fragmented file requires numerous seeks and is thus much slower, in addition to causing greater wear on the device. Further, if a resource is not fragmented, allocation requests can simply be satisfied by returning a single block from the start of the free area. However, if it is fragmented, the request requires either searching for a large enough free block, which may take a long time, or fulfilling the request by several smaller blocks (if this is possible), which results in this allocation being fragmented, and requiring additional overhead to manage the several pieces.

A subtler problem is that fragmentation may prematurely exhaust a cache, causing thrashing, due to caches holding blocks, not individual data. For example, suppose a program has a working set of 256 KiB, and is running on a computer with a 256 KiB cache (say L2 instruction+data cache), so the entire working set fits in cache and thus executes quickly, at least in terms of cache hits. Suppose further that it has 64 translation lookaside buffer (TLB) entries, each for a 4 KiB page: each memory access requires a virtual-to-physical translation, which is fast if the page is in cache (here TLB). If the working set is unfragmented, then it will fit onto exactly 64 pages (the page working set will be 64 pages), and all memory lookups can be served from cache. However, if the working set is fragmented, then it will not fit into 64 pages, and execution will slow due to thrashing: pages will be repeatedly added and removed from the TLB during operation. Thus cache sizing in system design must include margin to account for fragmentation.

Memory fragmentation is one of the most severe problems faced by system managers.[citation needed] Over time, it leads to degradation of system performance. Eventually, memory fragmentation may lead to complete loss of (application-usable) free memory.

Memory fragmentation is a kernel programming level problem. In real-time computing applications, fragmentation levels can reach as high as 99%, and may lead to system crashes or other instabilities.[citation needed] This type of system crash can be difficult to avoid, as it is impossible to anticipate the critical rise in levels of memory fragmentation. However, while it may not be possible for a system to continue running all programs in the case of excessive memory fragmentation, a well-designed system should be able to recover from the critical fragmentation condition by moving some memory blocks used by the system itself in order to enable consolidation of free memory into fewer, larger blocks, or, in the worst case, by terminating some programs to free their memory and then defragmenting the resulting sum total of free memory. This will at least avoid a true crash in the sense of system failure and allow the system to continue running some programs, save program data, etc.

Fragmentation is a phenomenon of system software design; different software will be susceptible to fragmentation to different degrees, and it is possible to design a system that will never be forced to shut down or kill processes as a result of memory fragmentation.

Analogous phenomena


While fragmentation is best known as a problem in memory allocation, analogous phenomena occur for other resources, notably processors.[6] For example, in a system that uses time-sharing for preemptive multitasking, but that does not check if a process is blocked, a process that executes for part of its time slice but then blocks and cannot proceed for the remainder of its time slice wastes time because of the resulting internal fragmentation of time slices. More fundamentally, time-sharing itself causes external fragmentation of processes due to running them in fragmented time slices, rather than in a single unbroken run. The resulting cost of process switching and increased cache pressure from multiple processes using the same caches can result in degraded performance.

In concurrent systems, particularly distributed systems, when a group of processes must interact in order to progress, if the processes are scheduled at separate times or on separate machines (fragmented across time or machines), the time spent waiting for each other or in communicating with each other can severely degrade performance. Instead, performant systems require coscheduling of the group.[6]

Some flash file systems have several different kinds of internal fragmentation involving "dead space" and "dark space".[7]


References

  1. ^ "CS360 Lecture notes -- Fragmentation". web.eecs.utk.edu. Retrieved 2025-08-07.
  2. ^ Null, Linda; Lobur, Julia (2006). The Essentials of Computer Organization and Architecture. Jones and Bartlett Publishers. p. 315. ISBN 9780763737696. Retrieved Jul 15, 2021.
  3. ^ "Partitioning, Partition Sizes and Drive Lettering". The PC Guide. April 17, 2001. Retrieved 2025-08-07.
  4. ^ "Switches: Sector copy". Symantec. 2025-08-07. Archived from the original on July 19, 2012. Retrieved 2025-08-07.
  5. ^ Samanta, D. (2004). Classic Data Structures. p. 76.
  6. ^ a b Ousterhout, John K. (1982). "Scheduling Techniques for Concurrent Systems" (PDF). Proceedings of Third International Conference on Distributed Computing Systems. pp. 22–30.
  7. ^ Hunter, Adrian (2008). "A Brief Introduction to the Design of UBIFS" (PDF). p. 8.
