Hello,
Any updates on when SPC-1 is going to be available as part of fio?
Thanks,
Kiran.
On Wed, Nov 26, 2014 at 7:23 PM, Michael O'Sullivan
michael.osulli...@auckland.ac.nz wrote:
Hi Luis,
We worked with Jens Axboe for a little bit to try and merge things, but then
just got busy testing
On 12/21/2014 11:10 PM, David F. Robinson wrote:
So for now it is up to all of the individual users to know they cannot use tar
without the -P switch if they are accessing a data storage system that uses
gluster?
Setting volume option cluster.read-hash-mode to 2 could help here. Can
you
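For reference, tunables like this are applied with the gluster CLI. A minimal sketch, assuming an existing volume (the volume name `gv0` is a placeholder, not from the original mail):

```shell
# "gv0" is a placeholder volume name.
gluster volume set gv0 cluster.read-hash-mode 2

# Reconfigured options are listed in the volume info output:
gluster volume info gv0
```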
The birthday paradox says that with a 44-bit hash we're more likely than
not to start seeing collisions somewhere around 2^22 directory entries.
That 16-million-entry-directory would have a lot of collisions.
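The arithmetic behind those figures can be checked with the standard birthday-bound approximation p ≈ 1 − exp(−n²/2d). A small sketch (not from the original mails):

```python
import math

def collision_probability(n, hash_bits):
    """Approximate probability of at least one collision when n
    entries are hashed into a 2**hash_bits space (birthday bound)."""
    d = 2.0 ** hash_bits
    return 1.0 - math.exp(-n * n / (2.0 * d))

def expected_colliding_pairs(n, hash_bits):
    """Approximate expected number of colliding pairs: C(n,2) / d."""
    d = 2.0 ** hash_bits
    return n * (n - 1) / (2.0 * d)

# With a 44-bit hash, collisions become likely around 2**22 entries:
print(collision_probability(2**22, 44))    # ~0.39
# A 16-million-entry directory is roughly 2**24 entries:
print(expected_colliding_pairs(2**24, 44)) # ~8
```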
This is really the key point. The risks of the bit-stealing approach
have been
That did not fix the issue (see below). I have also run into another
possibly related issue. After untarring the boost directory and
compiling the software, I cannot delete the source directory structure.
It says directory not empty.
corvidpost5:temp3/gfs \rm -r boost_1_57_0
rm: cannot remove
An alternative would be to convert directories into regular files from
the brick's point of view.
The benefits of this would be:
* d_off would be controlled by gluster, so all bricks would have the
same d_off and order. No need to use any d_off mapping or transformation.
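That first point can be illustrated with a toy model (this is an illustration, not Gluster code): if the translator itself assigns offsets from a canonical sort of the entry names, every brick reports the same d_off for the same name, and no per-brick mapping is needed.

```python
def assign_doffs(entries):
    """Toy model: derive d_off values from a canonical (sorted) order
    so every replica reports the same offset for the same entry.
    Real bricks return filesystem-chosen d_off values instead, which
    is what forces the mapping/transformation discussed above."""
    return {name: i + 1 for i, name in enumerate(sorted(entries))}

# Two "bricks" holding the same directory in different on-disk orders:
brick_a = ["src", "README", "Makefile"]
brick_b = ["Makefile", "src", "README"]

# Both bricks now agree on every d_off:
assert assign_doffs(brick_a) == assign_doffs(brick_b)
print(assign_doffs(brick_a))  # {'Makefile': 1, 'README': 2, 'src': 3}
```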
I don't think a
On Mon, Dec 22, 2014 at 09:30:29AM -0500, Jeff Darcy wrote:
By contrast, the failure mode for the map-caching approach - a simple
failure in readdir - is relatively benign. Such failures are also
likely to be less common, even if we adopt the *unprecedented*
requirement that the cache be
As part of our ongoing effort to improve the reliability and robustness
of GlusterD, we are also targeting concurrency-related issues; this
proposal concerns that area.
The Big-lock
GlusterD was originally designed as a single threaded application which
could
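The "big-lock" pattern being referred to, one coarse lock serializing every request once threads are introduced, can be sketched generically. This is an illustration under that assumption, not GlusterD's actual code; the class and handler names are hypothetical:

```python
import threading

class BigLockDaemon:
    """Illustration only: a single coarse-grained lock serializes all
    request handlers, preserving the original single-threaded
    assumptions at the cost of concurrency."""
    def __init__(self):
        self.big_lock = threading.Lock()
        self.volumes = {}

    def handle_create_volume(self, name):
        with self.big_lock:  # every handler takes the same lock
            self.volumes[name] = {"status": "Created"}

    def handle_start_volume(self, name):
        with self.big_lock:  # so handlers never interleave
            self.volumes[name]["status"] = "Started"

d = BigLockDaemon()
d.handle_create_volume("gv0")
d.handle_start_volume("gv0")
print(d.volumes["gv0"]["status"])  # Started
```

Because no handler ever runs concurrently with another, state transitions stay race-free, which is exactly why finer-grained locking is the harder follow-up work.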
Can you please take in http://review.gluster.org/#/c/9328/ for 3.6.2?
~Atin
On 12/19/2014 02:05 PM, Raghavendra Bhat wrote:
Hi,
glusterfs-3.6.2beta1 has been released. I am planning to make 3.6.2
before the end of this year. If there are some patches that have to go in
for 3.6.2, please send