Memory Management in Constrained Environments
https://www.imustread.com/2019/11/memory-management-in-constrained-environments.html?m=0
There are environments in which memory is a precious and often limited resource, and there are others in which performance is the key factor and programs must be fast no matter how much memory is available. From a memory management perspective, each of these environments requires specific techniques to overcome memory shortages and performance degradation. So, first we need to know what a constrained environment is.
This article is an excerpt from the book Extreme C by Kamran Amini. C still plays a critical role in 21st-century programming, remaining the core language for precision engineering, aviation, space research, and more. Kamran teaches you to build on your existing C knowledge, and you will gain new insight into the techniques used in low-memory environments to overcome memory shortages.
A constrained environment does not necessarily have a low memory capacity; rather, there are usually constraints that limit how much memory a program may use. These constraints can be hard limits imposed by your customer, hardware that provides only a small amount of memory, or an operating system that cannot address more memory (for example, MS-DOS). Even when there are no such constraints or hardware limitations, we as programmers should try to use the least possible amount of memory and to use it optimally. Memory consumption is one of the key non-functional requirements in a project and should be monitored and tuned carefully.
Memory-Constrained Environments
In these environments, limited memory is always a constraint, and algorithms should be designed to cope with memory shortages. Embedded systems, with memory sizes ranging from tens to hundreds of megabytes, usually fall into this category. There are a few tips regarding memory management in such environments, but none of them works as well as a nicely tuned algorithm. Algorithms with low memory complexity are usually used here; they typically have a higher time complexity, which must be traded off against their low memory usage.
To elaborate, every algorithm has a specific time complexity and memory complexity. Time complexity describes the relationship between the input size and the time the algorithm takes to complete. Similarly, memory complexity describes the relationship between the input size and the memory the algorithm consumes to complete its task. These complexities are usually expressed in Big-O notation, which we don't need to deal with here; our discussion is qualitative, so no math is required to talk about memory-constrained environments.
Ideally, an algorithm should have both low time complexity and low memory complexity. In other words, a fast algorithm that consumes little memory is highly desirable, but this "best of both worlds" situation is rare. It is equally unusual (unless the programmer is a novice) for an algorithm to consume a lot of memory and still perform poorly.
Most of the time, we face a trade-off between memory and speed (that is, time). For example, a sorting algorithm that is faster than another usually consumes more memory, even though both do the same job.
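As a minimal illustration of this trade-off (not an example from the book itself), the sketch below contrasts two textbook sorts in C: merge sort runs in O(n log n) time but needs an O(n) auxiliary buffer, while insertion sort needs only O(1) extra memory at the cost of O(n²) worst-case time.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// O(1) extra memory, O(n^2) worst-case time: suits memory-constrained systems.
static void insertion_sort(int *a, size_t n) {
  for (size_t i = 1; i < n; ++i) {
    int key = a[i];
    size_t j = i;
    while (j > 0 && a[j - 1] > key) {
      a[j] = a[j - 1];
      --j;
    }
    a[j] = key;
  }
}

// O(n log n) time, but needs an O(n) auxiliary buffer: the memory cost of speed.
static void merge_sort(int *a, int *buf, size_t n) {
  if (n < 2) return;
  size_t mid = n / 2;
  merge_sort(a, buf, mid);
  merge_sort(a + mid, buf, n - mid);
  size_t i = 0, j = mid, k = 0;
  while (i < mid && j < n) buf[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
  while (i < mid) buf[k++] = a[i++];
  while (j < n) buf[k++] = a[j++];
  memcpy(a, buf, n * sizeof(int));
}

int main(void) {
  int data1[] = {5, 2, 9, 1, 7};
  int data2[] = {5, 2, 9, 1, 7};
  size_t n = sizeof(data1) / sizeof(data1[0]);

  insertion_sort(data1, n);            // no extra memory needed

  int *buf = malloc(n * sizeof(int));  // extra memory bought for speed
  if (buf != NULL) {
    merge_sort(data2, buf, n);
    free(buf);
  }

  for (size_t i = 0; i < n; ++i) printf("%d ", data1[i]);
  printf("\n");
  return 0;
}
```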
It is a good, if conservative, practice to assume that we are writing code for a memory-constrained system, even when we know we will have more than enough memory in the final production environment. We make this assumption to mitigate the risk of excessive memory consumption.
Note that this assumption should be controlled and adjusted based on an accurate estimate of the memory that will be available in the final setup. Keep in mind that using memory up to a soft, reasonable limit, when enough memory is available, can boost performance: algorithms designed for memory-constrained environments are intrinsically slower, so be careful not to fall into this trap.
In the upcoming sections, we will cover some techniques that can help us reclaim wasted memory, or simply use less memory, in memory-constrained environments.
Packed Structures
One of the easiest ways to use less memory is to use packed structures. Packed structures discard memory alignment, giving them a more compact layout for storing their fields.
Using packed structures is a trade-off: you consume less memory because you discard memory alignment, but unaligned accesses take longer, so loading a structure variable becomes more expensive and the program runs slower. This method is simple, but it is not recommended for all programs.
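As a minimal sketch (with compiler- and target-dependent sizes), the following program compares the size of an ordinary structure with a packed one. Note that `__attribute__((packed))` is a GCC/Clang extension; other compilers, such as MSVC, use `#pragma pack` instead.

```c
#include <stdio.h>
#include <stdint.h>

// Default layout: the compiler inserts padding so each field starts
// at its natural alignment boundary.
struct normal_record {
  uint8_t  flag;   // 1 byte, typically followed by 3 bytes of padding
  uint32_t count;  // 4 bytes
  uint8_t  type;   // 1 byte, typically followed by 3 bytes of trailing padding
};

// Packed layout: padding is removed and fields are stored back to back.
struct __attribute__((packed)) packed_record {
  uint8_t  flag;
  uint32_t count;
  uint8_t  type;
};

int main(void) {
  printf("normal: %zu bytes\n", sizeof(struct normal_record)); // typically 12
  printf("packed: %zu bytes\n", sizeof(struct packed_record)); // 6
  return 0;
}
```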
Compression
Compression is an effective technique, especially for programs that keep a lot of textual data in memory, since textual data compresses far better than binary data. It allows a program to store the compressed form instead of the actual text, with a huge memory saving.
However, saving memory is not free: compression algorithms are CPU-bound and computation-intensive, so the program's performance suffers. This method is therefore ideal for programs that keep textual data that is not needed often; otherwise, the constant compression and decompression operations would eventually make the program almost unusable.
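As an illustrative sketch (not taken from the book), the following program uses the widely available zlib library (link with `-lz`) to keep a compressed copy of a string in memory and decompress it on demand; error handling is reduced to asserts for brevity.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <zlib.h>

int main(void) {
  const char *text = "aaaa bbbb cccc aaaa bbbb cccc aaaa bbbb cccc";
  uLong src_len = (uLong)strlen(text) + 1;  // include the terminating NUL

  // compressBound() gives the worst-case compressed size for the input.
  Bytef dst[256];
  uLongf dst_len = compressBound(src_len);
  assert(dst_len <= sizeof(dst));

  // Keep the compressed form in memory instead of the raw text.
  assert(compress(dst, &dst_len, (const Bytef *)text, src_len) == Z_OK);
  printf("raw: %lu bytes, compressed: %lu bytes\n",
         (unsigned long)src_len, (unsigned long)dst_len);

  // Decompress on demand, paying CPU time to get the text back.
  Bytef restored[256];
  uLongf restored_len = sizeof(restored);
  assert(uncompress(restored, &restored_len, dst, dst_len) == Z_OK);
  printf("restored: %s\n", (const char *)restored);
  return 0;
}
```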
External Data Storage
Using external data storage, in the form of a network service, a cloud infrastructure, or simply a hard drive, is a very common and useful technique for resolving low-memory issues. Since a program might be run in a limited or low-memory environment, many programs use this method to consume less memory, even in environments where enough memory is available.
This technique usually assumes that main memory is not the primary storage but rather acts as a cache. Another assumption is that we cannot keep all of the data in memory; at any moment, only a portion of it, a page of data, can be loaded.
These algorithms do not address the low-memory problem directly; rather, they try to solve another issue: slow external data storage. External storage is always far slower than main memory, so the algorithms must balance reads from the external store against the use of internal memory. All database services, such as PostgreSQL and Oracle, use this technique.
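To make the "memory as cache" idea concrete, here is a minimal, hypothetical sketch of a fixed-size page cache over a file, with a crude least-recently-used (LRU) eviction policy; all names here (`page_cache`, `fetch_page`, `data.bin`) are invented for illustration, and real systems such as SQLite or PostgreSQL use far more sophisticated buffer managers.

```c
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096
#define MAX_PAGES 4  // hard cap on how many pages stay in memory

typedef struct {
  long page_no;             // which page of the file this slot holds (-1 = empty)
  unsigned long last_used;  // logical clock value for LRU eviction
  char data[PAGE_SIZE];
} page_slot;

typedef struct {
  FILE *file;
  unsigned long clock;
  page_slot slots[MAX_PAGES];
} page_cache;

static void cache_init(page_cache *c, FILE *f) {
  c->file = f;
  c->clock = 0;
  for (int i = 0; i < MAX_PAGES; ++i) {
    c->slots[i].page_no = -1;
    c->slots[i].last_used = 0;
  }
}

// Return a pointer to the requested page, loading it from the file and
// evicting the least recently used slot on a cache miss.
static char *fetch_page(page_cache *c, long page_no) {
  int victim = 0;
  for (int i = 0; i < MAX_PAGES; ++i) {
    if (c->slots[i].page_no == page_no) {  // cache hit
      c->slots[i].last_used = ++c->clock;
      return c->slots[i].data;
    }
    if (c->slots[i].last_used < c->slots[victim].last_used)
      victim = i;  // remember the least recently used slot
  }
  // Cache miss: reuse the LRU slot for the page read from disk.
  page_slot *s = &c->slots[victim];
  s->page_no = page_no;
  s->last_used = ++c->clock;
  memset(s->data, 0, PAGE_SIZE);  // zero-pad short reads near EOF
  fseek(c->file, page_no * (long)PAGE_SIZE, SEEK_SET);
  fread(s->data, 1, PAGE_SIZE, c->file);
  return s->data;
}

int main(void) {
  FILE *f = fopen("data.bin", "rb");  // hypothetical data file
  if (!f) return 1;
  page_cache cache;
  cache_init(&cache, f);
  char *p = fetch_page(&cache, 0);  // only this one page resides in memory
  printf("first byte: %d\n", p[0]);
  fclose(f);
  return 0;
}
```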
In most projects, it is unwise to design and write these algorithms from scratch, because they are far from trivial to get right; the teams behind famous libraries such as SQLite have been fixing bugs in them for years. If you need to access external data storage, such as a file, a database, or a host on the network, while keeping a low memory footprint, there are always existing options for you.
Summary
In this article, we briefly discussed memory-constrained environments and saw how memory tuning can be done in them. Extreme C is a high-intensity guide to the most advanced capabilities of C, for programmers who want to push their limits.
Kamran Amini is an expert software architect with more than 10 years of experience in the analysis, design, development, and building of large-scale, distributed enterprise software systems. His skills are not limited to a specific development platform, and his architectural solutions span a variety of technologies, patterns, and concepts based on C and C++, Java, and Python. His passion for C and C++ began in his teenage years, when he led his high school's soccer simulation team, and he has made these languages the main axis of his career. Recently, blockchain and cryptocurrencies have become the focus of his research and interest, and thanks to his deep knowledge of classic cryptography and PKI, expanding their possible future uses and working on alternative blockchains are among his interests.