Hi list,
is there any working solution for deduplication of data for CentOS?
We are trying to find a solution for our backup server, which runs a bash
script invoking xdelta(3). But having this functionality in the filesystem
would be much friendlier...
We have looked into lessfs, sdfs and ddar.
-Original Message-
From: centos-boun...@centos.org [mailto:centos-boun...@centos.org] On Behalf Of
Dean Jones
Sent: Monday, August 27, 2012 11:45 AM
To: CentOS mailing list
Subject: Re: [CentOS] Deduplication data for CentOS?
Deduplication with ZFS takes a lot of RAM.
I would not yet
On Thu, Sep 13, 2012 at 12:06 PM, Ryan Palamara
ryan.palam...@zaisgroup.com wrote:
The better option for ZFS would be to get an SSD and move the dedupe table
onto that drive instead of keeping it in RAM, because it can become massive.
What's 'massive' in dollars these days?
--
Les Mikesell
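To put a rough number on "massive": the figure most often quoted on the lists is on the order of 320 bytes of RAM per unique block for a ZFS dedup table (DDT) entry. A back-of-envelope sketch (the 320-byte figure, the 10 TB pool size, and the default 128K recordsize are assumptions, not measurements):

```shell
#!/bin/sh
# Rough ZFS dedup-table sizing -- a sketch, not an official formula.
# ~320 bytes of RAM per unique block is the commonly quoted figure.
POOL_TB=10                                   # hypothetical pool size
RECORDSIZE=$((128 * 1024))                   # default ZFS recordsize (128K)
POOL_BYTES=$((POOL_TB * 1024 * 1024 * 1024 * 1024))
BLOCKS=$((POOL_BYTES / RECORDSIZE))          # worst case: all blocks unique
DDT_BYTES=$((BLOCKS * 320))
echo "unique blocks: $BLOCKS"
echo "approx DDT size: $((DDT_BYTES / 1024 / 1024 / 1024)) GiB"
```

So a 10 TB pool of mostly-unique data needs on the order of 25 GiB just for the table, which is why the thread keeps circling back to "lots of RAM" or an SSD for the DDT.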
On 08/29/12 2:43 AM, Rainer Traut wrote:
Yes, there is commercial software to do incremental backups, but I do not
know of command-line options for this. Maybe anyone here does?
Les is right: I stop the server, take the snapshot, start the server, and
run xdelta on the snapshot NSF files.
Having
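That stop/snapshot/start-then-xdelta flow could look roughly like the sketch below. The Domino commands and LVM names in the comments are hypothetical, and the xdelta3 round-trip is guarded in case the tool is absent:

```shell
#!/bin/sh
set -e
# The real script would first quiesce and snapshot (hypothetical names):
#   server -c "quit"                           # stop Domino
#   lvcreate -s -n nsf-snap -L 2G /dev/vg0/data
#   server                                     # start Domino again
work=$(mktemp -d)
printf 'old nsf contents' > "$work/db.nsf.prev"   # yesterday's full copy
printf 'new nsf contents' > "$work/db.nsf"        # today's, from the snapshot
if command -v xdelta3 >/dev/null 2>&1; then
  # encode: delta of today's file against yesterday's; store only the delta
  xdelta3 -e -s "$work/db.nsf.prev" "$work/db.nsf" "$work/db.nsf.vcdiff"
  # restore: apply the delta to the previous full copy
  xdelta3 -d -s "$work/db.nsf.prev" "$work/db.nsf.vcdiff" "$work/restored.nsf"
  cmp -s "$work/db.nsf" "$work/restored.nsf" && echo "round-trip ok"
else
  echo "xdelta3 not installed; skipping round-trip"
fi
```

Since the .nsf files only change slightly, the stored vcdiff deltas stay small, which is the whole appeal of this scheme over whole-file copies.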
On 27.08.2012 16:04, Janne Snabb wrote:
On 08/27/2012 07:23 PM, Rainer Traut wrote:
Yeah, I know it has this feature, but is there a working ZFS
implementation for Linux?
I have heard some positive feedback about http://zfsonlinux.org/ but I
have not had time to test it myself yet. It probably depends on your
intended usage. It is a
Sorry for the top posting.
Dedup is just hype. After a while, the table that manages the deduped data
will simply become too big. Don't use it for the long term.
Sent from Samsung Galaxy ^^
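The "table" being complained about is the checksum-to-block map every dedup scheme keeps. A coreutils toy (4-byte "blocks", purely for illustration) shows that it grows with the number of *unique* blocks, not with total data:

```shell
#!/bin/sh
set -e
work=$(mktemp -d); cd "$work"
printf 'AAAABBBBCCCC' > f1        # three 4-byte blocks
printf 'AAAADDDDCCCC' > f2        # shares two of them with f1
split -b 4 f1 f1_                 # chunk into fixed-size blocks
split -b 4 f2 f2_
total=$(ls f1_* f2_* | wc -l)
# the dedup "table": one entry per distinct block checksum
unique=$(sha256sum f1_* f2_* | awk '{print $1}' | sort -u | wc -l)
echo "stored blocks: $total, table entries: $unique"
```

Here 6 stored blocks need only 4 table entries, but on a multi-TB backup set the entry count scales with unique data, which is exactly the long-term growth the poster is warning about.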
___
CentOS mailing list
CentOS@centos.org
On 08/28/12 12:58 AM, Rainer Traut wrote:
The website looks promising. They are using a thing called SPL,
Sun/Solaris Porting Layer to be able to use the Solaris ZFS code.
But there is no OpenSolaris anymore, is there? Does that mean they have to
stay with the ZFS code from when it was still open?
opensolaris
On 08/28/12 1:03 AM, Rainer Traut wrote:
Rsync is of no use for us. We have mainly big Domino .nsf files which
only change slightly. So rsync would not be able to make many hardlinks. :)
so you need block level dedup? good luck with that. never seen a
scheme yet that wasn't full of issues.
On 28.08.2012 at 10:03, Rainer Traut wrote:
Rsync is of no use for us. We have mainly big Domino .nsf files which
only change slightly. So rsync would not be able to make many hardlinks. :)
Can this endeavor ensure the consistency of these database files?
--
LF
On Tue, Aug 28, 2012 at 3:03 AM, Rainer Traut tr...@gmx.de wrote:
Rsync is of no use for us. We have mainly big Domino .nsf files which
only change slightly. So rsync would not be able to make many hardlinks. :)
Rdiff-backup might work for this since it stores deltas. Are you
doing
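The rsync/BackupPC hardlink approach that was ruled out can be sketched with cp -al; it also shows why one small change to a big .nsf defeats it, forcing a whole new copy (directory names are hypothetical):

```shell
#!/bin/sh
set -e
work=$(mktemp -d)
mkdir "$work/mon"
printf 'big domino database'    > "$work/mon/db.nsf"
cp -al "$work/mon" "$work/tue"            # hardlink copy: no data duplicated
echo "links after copy: $(stat -c %h "$work/tue/db.nsf")"     # 2
# the file changed slightly -> the link must be replaced by a FULL new copy
printf 'big domino database v2' > "$work/tue/db.nsf.tmp"
mv "$work/tue/db.nsf.tmp" "$work/tue/db.nsf"
echo "links after change: $(stat -c %h "$work/mon/db.nsf")"   # back to 1
```

This is why hardlink dedup works well for trees of mostly-unchanged small files but stores an entire new copy of each multi-GB database that changes at all, while rdiff-backup (or the xdelta script) stores only the delta.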
On 08/27/12 4:55 AM, Rainer Traut wrote:
is there any working solution for deduplication of data for centos?
We are trying to find a solution for our backup server which runs a bash
script invoking xdelta(3). But having this functionality in fs is much
more friendly...
BackupPC does exactly
- Original Message -
From: Rainer Traut tr...@gmx.de
To: centos@centos.org
Sent: Monday, August 27, 2012 4:55:03 AM
Subject: [CentOS] Deduplication data for CentOS?
On Mon, 2012-08-27 at 14:32 -0400, Brian Mathis wrote:
On Mon, Aug 27, 2012 at 7:55 AM, Rainer Traut tr...@gmx.de wrote:
We have looked into lessfs, sdfs and ddar.
Are these filesystems ready to use (on centos)?
ddar is something different, I know.
This is something I have been thinking about