language agnostic - How can standards guarantee a data structure uses contiguous memory?


I was wondering how memory works, in the sense of how language standards (such as the C++ ISO/ANSI standard) can guarantee that a given data structure (such as an array) will be contiguous.

I do not even know how one would write a data structure using contiguous memory, but could you give me a short example of how a designer might do it? For example, assuming a C++ std::vector allocates all of its memory at runtime, how does it know which memory slots beyond its currently allocated memory are not in use (and thus free for the vector to use)? Does the vector look far ahead and anticipate that the user might push_back more objects than it can store in any contiguous memory block? Or does the operating system shuffle things around in memory so that this never becomes a problem (and if so, how would it know to do that)?

Your question shows that you are trying to understand the concept of memory allocation from first principles. Let me try to explain what is going on in a very simplified way. As an example, consider a C++ program that adds a lot of elements to a std::vector.
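For concreteness, here is a minimal sketch of the kind of program discussed below; the element count of 1000 is arbitrary:

  #include <iostream>
  #include <vector>

  int main() {
      std::vector<int> v;                  // starts empty
      for (int i = 0; i < 1000; ++i) {
          v.push_back(i);                  // elements live in one contiguous buffer
      }
      std::cout << "size: " << v.size()
                << ", capacity: " << v.capacity() << '\n';
  }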

When the program starts, the C++ runtime allocates some memory from the operating system. This chunk of memory is called the heap, and it is used whenever the C++ program needs dynamic memory. Initially most of the heap is unused, but calls to new and malloc create blocks of memory on the heap. The heap internally keeps some bookkeeping information to track which areas are free and which are in use.
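As a rough illustration (not how any particular runtime lays out its heap), each call to new asks the heap for a block and each delete returns it:

  #include <iostream>

  int main() {
      int* a = new int[100];               // request a block of 100 ints from the heap
      int* b = new int[50];                // a second, separate block on the heap
      std::cout << a << ' ' << b << '\n';  // two distinct heap addresses
      delete[] a;                          // the heap marks this block as free again
      delete[] b;
  }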

How exactly std::vector behaves internally depends on the implementation, but in general it will allocate a buffer on the heap for the elements of the vector. The buffer is large enough to hold all of the vector's current elements, and most of the time there is some free space left at the end. Here is a buffer that stores 5 elements and has room for 8; it is located on the heap at address 1000:

  1000: XXXXX _ _ _  

std::vector keeps track of the number of elements (5), the size of the buffer (8), and the location of the buffer (1000).
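These three pieces of information can be observed through the standard interface; the concrete numbers 5, 8 and 1000 above are of course made up for the example:

  #include <iostream>
  #include <vector>

  int main() {
      std::vector<char> v(5, 'x');   // 5 elements
      v.reserve(8);                  // ask for room for at least 8
      std::cout << "elements: "     << v.size()       // number of elements (5)
                << ", buffer size: " << v.capacity()   // room in the buffer (>= 8)
                << ", buffer at: "   << static_cast<void*>(v.data()) << '\n';
  }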

Here is the vector's buffer after push_back adds a new element:

  1000: XXXXXX _ _  

push_back can be called twice more before all of the space in the buffer is used up:

  1000: XXXXXXXX  
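A small check (using reserve to reproduce the 8-element buffer from the example) shows that push_back within the existing capacity does not move the buffer:

  #include <iostream>
  #include <vector>

  int main() {
      std::vector<char> v(5, 'x');
      v.reserve(8);                          // buffer with room for at least 8
      const char* before = v.data();
      v.push_back('x');                      // 6 elements
      v.push_back('x');                      // 7 elements
      v.push_back('x');                      // 8 elements, buffer now full
      std::cout << std::boolalpha
                << "buffer unchanged: " << (v.data() == before) << '\n';  // true
  }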

But what happens if push_back is called again? The vector has to grow the buffer allocated on the heap. If the area right after the buffer happens to be unused, it may actually be possible to grow the buffer in place. Most of the time, however, that memory will already have been allocated to some other object (the heap keeps track of this), so a completely new, larger buffer has to be allocated. Many implementations will double the size of the buffer. Here is the new buffer, which stores 9 elements and has room for 16; it has been allocated on the heap at address 2000:

  2000: XXXXXXXXX _ _ _ _ _ _ _  

The contents of the old buffer are copied into the new buffer, and the old buffer is then freed. If the buffer is large, this operation can be expensive.
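To make the growth step concrete, here is a heavily simplified, hypothetical sketch of what a vector-like container might do in push_back; real implementations also deal with exceptions, non-trivial element types and allocators:

  #include <algorithm>
  #include <cstddef>

  // Toy vector of ints, only to illustrate the reallocate-and-copy step.
  class ToyVector {
      int*        buf_      = nullptr;  // address of the buffer on the heap
      std::size_t size_     = 0;        // number of elements in use
      std::size_t capacity_ = 0;        // how many elements the buffer can hold

  public:
      void push_back(int value) {
          if (size_ == capacity_) {
              // Buffer is full: allocate a new, twice-as-large buffer...
              std::size_t new_cap = capacity_ == 0 ? 1 : capacity_ * 2;
              int* new_buf = new int[new_cap];
              // ...copy the old contents over (this is the expensive part)...
              std::copy(buf_, buf_ + size_, new_buf);
              // ...and release the old buffer back to the heap.
              delete[] buf_;
              buf_ = new_buf;
              capacity_ = new_cap;
          }
          buf_[size_++] = value;        // store the new element contiguously
      }

      ~ToyVector() { delete[] buf_; }
  };

  int main() {
      ToyVector v;
      for (int i = 0; i < 100; ++i) v.push_back(i);  // triggers several reallocations
  }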

Note that the heap itself can grow while the program is running, whenever the blocks already allocated on it are not enough. This increases the memory consumption of the program. As more and more elements are added to the vector, the heap keeps growing until the operating system refuses to grow it any further.
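When that happens, the failed allocation inside push_back normally surfaces in C++ as a std::bad_alloc exception (though on systems that overcommit memory the process may simply be killed instead):

  #include <iostream>
  #include <new>
  #include <vector>

  int main() {
      std::vector<long> v;
      try {
          for (;;) {
              v.push_back(0);   // keeps growing the heap...
          }
      } catch (const std::bad_alloc&) {
          // ...until the operating system refuses to give out more memory.
          std::cout << "out of memory after " << v.size() << " elements\n";
      }
  }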

To summarize what happens before the operating system finally runs into an out-of-memory condition:

    • The operating system supplies memory for the heap, which can grow until the operating system's limits are reached.
    • The memory allocation routines in C++ (new and malloc) allocate and free blocks of memory on the heap.
    • std::vector keeps some spare room in its buffer to allow the vector to grow, but if the vector grows beyond the buffer size, it allocates a new buffer and copies the entire contents of the vector into the new buffer (the growth pattern is shown in the sketch after this list).
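The growth pattern described in the last bullet is easy to observe; the exact capacities are implementation-defined (many implementations double, some grow by a factor such as 1.5):

  #include <cstddef>
  #include <iostream>
  #include <vector>

  int main() {
      std::vector<int> v;
      std::size_t last_capacity = v.capacity();
      for (int i = 0; i < 100; ++i) {
          v.push_back(i);
          if (v.capacity() != last_capacity) {       // a new, larger buffer was allocated
              last_capacity = v.capacity();
              std::cout << "size " << v.size()
                        << " -> capacity " << v.capacity()
                        << ", buffer at " << static_cast<const void*>(v.data()) << '\n';
          }
      }
  }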
