Divide the image into a 1000x1000 grid, then compute and store a hash of each
individual cell. Now compare hashes of cells instead of raw pixels. Assuming
each hash is 16 bytes, it takes an additional ~16 MB of memory, but the time
required to localize the point of change is reduced by a factor of
~10^6 (you compare 10^6 digests instead of 10^12 pixels).
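The cell-hashing idea above can be sketched in a few lines of Python. This is a toy: a 4x4 grayscale image with 2x2 cells stands in for the 10^6 x 10^6 image with 1000x1000 cells, and MD5 supplies the 16-byte digest.

```python
import hashlib

def grid_hashes(pixels: bytes, rows: int, cols: int, cell: int):
    """Hash every cell x cell block of a rows x cols grayscale image.

    For a 10^6 x 10^6 image split into 1000x1000 cells this yields
    10^6 16-byte MD5 digests, i.e. ~16 MB of hashes."""
    hashes = {}
    for r in range(0, rows, cell):
        for c in range(0, cols, cell):
            block = bytearray()
            for dr in range(cell):
                start = (r + dr) * cols + c
                block += pixels[start:start + cell]
            hashes[(r, c)] = hashlib.md5(bytes(block)).digest()
    return hashes

def changed_cells(old, new):
    """Cells whose hashes differ: the only regions needing a pixel-level diff."""
    return sorted(pos for pos in old if old[pos] != new[pos])

# Toy demo: flip one pixel and localize the change to its cell.
a = bytes(range(16))
b = bytearray(a)
b[5] ^= 0xFF                        # pixel (1, 1) lives in cell (0, 0)
print(changed_cells(grid_hashes(a, 4, 4, 2), grid_hashes(bytes(b), 4, 4, 2)))
# -> [(0, 0)]
```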
To reduce this further, use consistent hashing; see Chord.
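A minimal sketch of consistent hashing as Chord-style systems use it: points on a sorted ring, each key owned by the first node clockwise from its hash. Node names and the replica count are illustrative, not from the thread.

```python
import bisect
import hashlib

class HashRing:
    """Toy consistent-hash ring. Each node is placed at `replicas` virtual
    points on the ring, so adding or removing a node only remaps the keys
    that fall near its points, not the whole keyspace."""

    def __init__(self, nodes, replicas=100):
        self.replicas = replicas
        self.ring = []                       # sorted (point, node) pairs
        for node in nodes:
            self.add(node)

    @staticmethod
    def _point(key: str) -> int:
        return int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")

    def add(self, node: str):
        for i in range(self.replicas):
            bisect.insort(self.ring, (self._point(f"{node}#{i}"), node))

    def node_for(self, key: str) -> str:
        # First ring point at or after the key's hash, wrapping around.
        i = bisect.bisect(self.ring, (self._point(key), "")) % len(self.ring)
        return self.ring[i][1]

ring = HashRing(["node-a", "node-b", "node-c"])
owner = ring.node_for("cell-0-0")            # deterministic owner for this key
```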
On Fri, Jul 15, 2011 at 3:22 AM, DK divyekap...@gmail.com wrote:
@Sagar: And how would you resolve hash collisions?
--
DK
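One standard answer to the collision question, not given in the thread but a common sketch: treat a digest match only as "probably equal" and fall back to comparing raw bytes, so a collision can never misreport a changed cell as unchanged.

```python
import hashlib

def same_cell(cell_a: bytes, cell_b: bytes) -> bool:
    # Fast path: different 16-byte digests mean the cells certainly differ.
    if hashlib.md5(cell_a).digest() != hashlib.md5(cell_b).digest():
        return False
    # Slow path (rare): digests match, so verify the raw bytes to rule
    # out a hash collision before declaring the cell unchanged.
    return cell_a == cell_b
```

The slow path runs only on digest matches, so the expected cost stays close to one hash comparison per cell.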
--
You received this message because you are subscribed to the Google Groups
Algorithm Geeks group.
Oh c'mon, take a HASH of the image, which needs less space compared to the
original one
:)
On Wed, Jul 13, 2011 at 1:01 AM, DK divyekap...@gmail.com wrote:
@Santosh: It depends on the type of the image. An image cannot be
represented in a (lossy) compressed format using Fourier or wavelet
transforms without losing information.
@Sagar: And how would you resolve hash collisions?
--
DK
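To make the lossy-transform point concrete, here is a pure-Python sketch: "compress" a signal by keeping only its largest Fourier coefficients. The reconstruction is close but not bit-identical to the input, which is exactly why a hash of a lossily round-tripped image would no longer match the original.

```python
import cmath

def dft(x):
    """Discrete Fourier transform (naive O(n^2), fine for a demo)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(coeffs):
    """Inverse DFT; .real is safe here because the input signal is real."""
    n = len(coeffs)
    return [sum(coeffs[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

signal = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
coeffs = dft(signal)

# Lossy step: zero all but the 4 largest-magnitude coefficients.
keep = set(sorted(range(8), key=lambda k: -abs(coeffs[k]))[:4])
approx = idft([coeffs[k] if k in keep else 0 for k in range(8)])
# `approx` tracks `signal` but is not equal to it.
```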
To view this discussion on the web visit
https://groups.google.com/d/msg/algogeeks/-/wmjP3AkyfocJ.
To store this whole image we need 2^70 ≈ 10^21 bytes of disk, assuming
each pixel takes 1 byte. Always go for metadata with large entities;
here, metadata includes type (e.g. JPEG, BMP, GIF), size, name, pointer
to a log-file entry, etc.
Shashank
CSE,BIT Mesra
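The metadata record Shashank describes might look like the following sketch; the field names and example values are illustrative, not from the thread.

```python
from dataclasses import dataclass

@dataclass
class ImageMetadata:
    name: str        # file name
    type: str        # format, e.g. "jpeg", "bmp", "gif"
    size: int        # size in bytes (2^70 for the image above)
    log_entry: int   # pointer/offset of the corresponding log-file entry

# Comparisons and bookkeeping touch this tiny record, never the 2^70 bytes.
meta = ImageMetadata(name="huge-scan.bmp", type="bmp", size=2**70, log_entry=0)
```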
Refer to the Fourier transform and wavelet transform of the image.
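As a concrete wavelet example (my sketch, not from the thread): one level of the Haar transform, the simplest wavelet, replaces each pixel pair with an average and a detail. The transform itself is exactly invertible; lossy compression comes from discarding small details afterwards.

```python
def haar_step(x):
    """One level of the 1-D Haar wavelet transform: pairwise averages
    (coarse image) and pairwise half-differences (details)."""
    avgs = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    details = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return avgs, details

def haar_inverse(avgs, details):
    """Exact inverse: each (average, detail) pair restores its two pixels."""
    out = []
    for a, d in zip(avgs, details):
        out += [a + d, a - d]
    return out

row = [3, 1, 4, 1, 5, 9, 2, 6]
avgs, details = haar_step(row)       # avgs = [2.0, 2.5, 7.0, 4.0]
restored = haar_inverse(avgs, details)
```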
On Tue, Jul 12, 2011 at 7:38 PM, bittu shashank7andr...@gmail.com wrote: