I am writing a cross-platform shared library in C. The workflow of this library looks like this:
    lib *handle = lib_init();
    results = lib_do_work(handle);
    lib_destroy(handle);

Typically the user initializes the library when their application starts and destroys it when the application exits. lib_do_work() is often called many times in succession, so to avoid allocating and freeing memory on every call I use a pooling mechanism. With this, I ask the pool for an instance of the structure I need. The pool returns an unused instance or creates a new one; if nothing is free, the newly created instance is also added to the pool so it can be reused next time. Every API function in my library begins with a call to reset_pool(), which marks all elements in the pool as reusable. The pool is destroyed as part of the lib_destroy() call. In my tests, I noticed that the pool sometimes grows to 100,000+ instances of the structure. Is this a good practice for handling memory? Any help would be great.
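In rough outline, the pool works something like this (a simplified, single-threaded sketch; the names pool, item, pool_get, etc. are illustrative, not my real identifiers):

    #include <stdlib.h>

    typedef struct item {
        int in_use;
        /* ... the structure actually being pooled ... */
    } item;

    typedef struct pool {
        item **slots;
        size_t count, cap;
    } pool;

    /* Return an unused instance, or allocate a new one and add it to the pool. */
    item *pool_get(pool *p) {
        for (size_t i = 0; i < p->count; i++) {
            if (!p->slots[i]->in_use) {
                p->slots[i]->in_use = 1;
                return p->slots[i];
            }
        }
        if (p->count == p->cap) {               /* grow the slot array as needed */
            size_t ncap = p->cap ? p->cap * 2 : 16;
            item **ns = realloc(p->slots, ncap * sizeof *ns);
            if (!ns) return NULL;
            p->slots = ns;
            p->cap = ncap;
        }
        item *it = calloc(1, sizeof *it);
        if (!it) return NULL;
        it->in_use = 1;
        p->slots[p->count++] = it;              /* new instance stays in the pool */
        return it;
    }

    /* Called at the top of each public API function: mark everything reusable. */
    void reset_pool(pool *p) {
        for (size_t i = 0; i < p->count; i++)
            p->slots[i]->in_use = 0;
    }

    /* Called from lib_destroy(): free every instance and the pool itself. */
    void destroy_pool(pool *p) {
        for (size_t i = 0; i < p->count; i++)
            free(p->slots[i]);
        free(p->slots);
        p->slots = NULL;
        p->count = p->cap = 0;
    }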
I don't know whether this fits your existing architecture, but usually a pool implementation limits the number of instances and queues incoming requests when all allocated instances are busy.
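For example, here is a minimal sketch of a bounded pool using POSIX threads, where a request waits on a condition variable until an instance is released instead of allocating past the cap. The names, the POOL_MAX value, and the work_item type are all illustrative:

    #include <pthread.h>
    #include <stdlib.h>

    #define POOL_MAX 1024   /* assumed cap; tune to your workload */

    typedef struct work_item {
        int in_use;
        /* ... payload ... */
    } work_item;

    typedef struct {
        work_item *items[POOL_MAX];
        size_t count;           /* instances allocated so far */
        pthread_mutex_t lock;
        pthread_cond_t freed;   /* signaled when an item is released */
    } bounded_pool;

    void pool_init(bounded_pool *p) {
        p->count = 0;
        pthread_mutex_init(&p->lock, NULL);
        pthread_cond_init(&p->freed, NULL);
    }

    work_item *pool_acquire(bounded_pool *p) {
        pthread_mutex_lock(&p->lock);
        for (;;) {
            /* First preference: reuse an idle instance. */
            for (size_t i = 0; i < p->count; i++) {
                if (!p->items[i]->in_use) {
                    p->items[i]->in_use = 1;
                    pthread_mutex_unlock(&p->lock);
                    return p->items[i];
                }
            }
            /* Second preference: grow, but only up to the cap. */
            if (p->count < POOL_MAX) {
                work_item *it = calloc(1, sizeof *it);
                if (!it) { pthread_mutex_unlock(&p->lock); return NULL; }
                it->in_use = 1;
                p->items[p->count++] = it;
                pthread_mutex_unlock(&p->lock);
                return it;
            }
            /* All POOL_MAX instances busy: queue the request by waiting. */
            pthread_cond_wait(&p->freed, &p->lock);
        }
    }

    void pool_release(bounded_pool *p, work_item *it) {
        pthread_mutex_lock(&p->lock);
        it->in_use = 0;
        pthread_cond_signal(&p->freed);  /* wake one queued pool_acquire() */
        pthread_mutex_unlock(&p->lock);
    }

With a cap like this, a burst of calls can no longer balloon the pool to 100,000+ instances; the worst case is bounded memory plus some waiting under peak load.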