Hi All,

Apart from the code I posted earlier in this thread, I also tried
testing with the OpenSSL command-line tool. I ran

    openssl dgst -sha256 Test_blob

Test_blob and all the files mentioned below are roughly 44 MB or larger.

The first time, the buff/cache value increased by 44 MB (the size of
the file):

              total        used        free      shared  buff/cache   available
Mem:         252180       12984      181544         284       57652      231188
Swap:             0           0           0

I ran the same OpenSSL command again with the same file, and the output
of free remained unchanged:

              total        used        free      shared  buff/cache   available
Mem:         252180       12984      181544         284       57652      231188
Swap:             0           0           0

Next, I ran the same command with a different file (let's say
Test_blob2) and ran free again afterwards:

              total        used        free      shared  buff/cache   available
Mem:         252180       12948      137916         284      101316      231200
Swap:             0           0           0

The buff/cache value has again increased by the size of the file
concerned (almost 44 MB).
If I run the same command a third time with another file not previously
used (let's say Test_blob3), the following happens:

Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
Rebooting in 15 seconds..

Is there a way to resolve this problem? How do I clear the buff/cache?
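
One thing I am considering is posix_fadvise(). Below is only a sketch
of the idea (assuming Linux and a stdio FILE * like the one my code
uses; POSIX_FADV_DONTNEED is purely advisory, and the kernel may ignore
the hint):

    #include <stdio.h>
    #include <fcntl.h>   /* posix_fadvise(), POSIX_FADV_DONTNEED */

    /* Hint that the cached pages of this file are no longer needed so
     * the kernel can reclaim them right away. Advisory only: the
     * kernel is free to keep the pages. */
    static void drop_file_cache(FILE *fp)
    {
        /* offset 0, length 0 means "the whole file" on Linux */
        posix_fadvise(fileno(fp), 0, 0, POSIX_FADV_DONTNEED);
    }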

On Sun, 24 Feb 2019 at 03:15, Georg Höllrigl <georg.hoellr...@gmx.at> wrote:

> Hello,
>
>
>
> I guess you’re not seeing a memory leak, but just the normal behaviour
> of the Linux file system cache.
>
> Buff/cache keeps recently accessed files in memory for as long as that
> memory is not needed by anything else.
>
> You don’t need to free the buffer/cache, because Linux does that
> automatically when memory is needed.
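>
> A quick way to see this in action (just an illustration, not something
> I have tested on your board: allocate more than the amount currently
> shown as "free" and watch buff/cache shrink instead of the allocation
> failing):
>
>     #include <stdlib.h>
>     #include <string.h>
>
>     /* Touch a large buffer; to satisfy the page faults the kernel
>      * reclaims page cache (buff/cache) before resorting to anything
>      * drastic. Compare `free` before and after. */
>     int main(void)
>     {
>         size_t len = 200 * 1024 * 1024;   /* adjust to your board */
>         char *p = malloc(len);
>         if (p == NULL)
>             return 1;
>         memset(p, 1, len);                /* fault every page in */
>         free(p);
>         return 0;
>     }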
>
>
>
> Kind Regards,
>
> Georg
>
>
>
> From: openssl-users <openssl-users-boun...@openssl.org> On Behalf Of
> prithiraj das
> Sent: 23 February 2019 18:25
> To: Jordan Brown <open...@jordan.maileater.net>
> Cc: openssl-users@openssl.org
> Subject: Re: OpenSSL hash memory leak
>
>
>
> Hi,
>
> This is how I have initialized my variables:
>
>
>
>     EVP_MD_CTX *mdctx;
>     const EVP_MD *md;
>     int i;
>     HASH hash_data;
>     unsigned char message_data[BUFFER_SIZE];
>
>
>
> BUFFER_SIZE has been defined as 131072, and HASH is my hash structure,
> defined to hold the message digest, the message digest length and the
> message digest type.
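>
> Roughly, it looks like this (paraphrasing; md_value, md_len and
> md_type are the fields the code below uses, and the types follow the
> EVP functions they are passed to; the exact layout here is
> illustrative):
>
>     #include <openssl/evp.h>   /* for EVP_MAX_MD_SIZE */
>
>     #define BUFFER_SIZE 131072
>
>     typedef struct {
>         unsigned char md_value[EVP_MAX_MD_SIZE];  /* digest bytes      */
>         unsigned int  md_len;                     /* digest length     */
>         int           md_type;                    /* digest type (NID) */
>     } HASH;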
>
>
>
> On Sat, 23 Feb 2019 at 00:17, Jordan Brown <open...@jordan.maileater.net>
> wrote:
>
> The most obvious question is "how are you allocating your message_data
> buffer?".  You don't show that.
>
>
>
> On 2/22/2019 2:27 AM, prithiraj das wrote:
>
>
>
> Hi All,
>
>
>
> Using OpenSSL 1.0.2g, I have written code to generate the hash of a
> file on an embedded Linux device with low memory, where the files are
> generally 44 MB or larger. The first time, and on some occasions even
> the second time, the hash of a file is generated successfully. On the
> 3rd or 4th attempt (possibly due to lack of memory or a memory leak),
> the system reboots before the hash can be generated. After the
> restart, the same thing happens when the previous steps are repeated.
>
> The stats below show the memory usage before and after computing the
> hash.
>
>
>
> root@at91sam9m10g45ek:~# free
>               total        used        free      shared  buff/cache   available
> Mem:         252180       13272      223048         280       15860      230924
> Swap:             0           0           0
>
>
>
> After computing the hash:
>
> root@at91sam9m10g45ek:~# free
>               total        used        free      shared  buff/cache   available
> Mem:         252180       13308      179308         280       59564      230868
> Swap:             0           0           0
>
>
>
> Buff/cache increases by almost 44 MB (the same as the file size) every
> time I generate the hash, and free decreases accordingly. I believe
> the file is being loaded into the buffer cache and not being freed.
>
>
>
> I am using the code below to compute the message digest. It is part of
> a function ComputeHash, and the file pointer here is fph.
>
>
>
>     EVP_add_digest(EVP_sha256());
>     md = EVP_get_digestbyname("sha256");
>
>     if (!md) {
>         printf("Unknown message digest\n");
>         exit(1);
>     }
>     printf("Message digest algorithm successfully loaded\n");
>
>     mdctx = EVP_MD_CTX_create();
>     EVP_DigestInit_ex(mdctx, md, NULL);
>
>     // Reading data into an array of unsigned chars
>     long long int bytes_read = 0;
>
>     printf("FILE size of the file to be hashed is %ld", filesize);
>
>     // Reading the image file in chunks below; fph is the file
>     // pointer to the 44 MB file
>     while ((bytes_read = fread(message_data, 1, BUFFER_SIZE, fph)) != 0)
>         EVP_DigestUpdate(mdctx, message_data, bytes_read);
>
>     EVP_DigestFinal_ex(mdctx, hash_data.md_value, &hash_data.md_len);
>     printf("\n%d\n", EVP_MD_CTX_size(mdctx));
>     printf("\n%d\n", EVP_MD_CTX_type(mdctx));
>     hash_data.md_type = EVP_MD_CTX_type(mdctx);
>     EVP_MD_CTX_destroy(mdctx);
>     // fclose(fp);
>
>     printf("Generated Digest is:\n ");
>     for (i = 0; i < hash_data.md_len; i++)
>         printf("%02x", hash_data.md_value[i]);
>     printf("\n");
>
>     EVP_cleanup();
>
>     return hash_data;
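>
> One thing I should probably also do after that read loop: fread()
> returns 0 both at end-of-file and on a read error, so checking
> ferror() would distinguish the two, e.g.:
>
>     if (ferror(fph))
>         printf("Read error while hashing\n");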
>
>
>
> In the calling code below, I do call fclose(fp):
>
>     verify_hash = ComputeHash(fp, size1);
>     fclose(fp);
>
>
>
> I believe that instead of loading the entire file at once, I am
> reading the 44 MB file in chunks and computing the hash with the piece
> of code below (fph is the file pointer):
>
>     while ((bytes_read = fread(message_data, 1, BUFFER_SIZE, fph)) != 0)
>         EVP_DigestUpdate(mdctx, message_data, bytes_read);
>
>
>
> Where am I going wrong? How can I free the buff/cache after the
> message digest has been computed? Please suggest ways to tackle this.
>
>
>
>
>
> Thanks and Regards,
>
> Prithiraj
>
>
>
>
>
> --
>
> Jordan Brown, Oracle ZFS Storage Appliance, Oracle Solaris
>
>
