On Thu, May 21, 2015 at 09:05:36PM +0200, gianpietro.se...@unipd.it wrote:
> > On Wed, May 13, 2015 at 07:38:03PM +, gianpietro sella wrote:
> >> J. Bruce Fields fieldses.org> writes:
> >>
> >> >
> >> > On Wed, May 13, 2015 at 01:06:17PM
On Wed, May 13, 2015 at 07:38:03PM +, gianpietro sella wrote:
> J. Bruce Fields fieldses.org> writes:
>
> >
> > On Wed, May 13, 2015 at 01:06:17PM +0200, sella gianpietro wrote:
> > > this is the inode count in the exported folder of the volume
> >
On Wed, May 13, 2015 at 01:06:17PM +0200, sella gianpietro wrote:
> this is the inode count in the exported folder of the volume
> on the server before writing a file from the client:
>
> [root@cld-blu-13 nova]# du --inodes
> 2 .
>
> this is the block usage:
>
> [root@cld-blu-13 nova]# df -T
>
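The checks quoted above can be reproduced with a small self-contained example; the directory name here is an illustrative stand-in for the poster's export path.

```shell
# Illustrative stand-in for the poster's export; /tmp/demo-export is made up.
mkdir -p /tmp/demo-export
touch /tmp/demo-export/file1 /tmp/demo-export/file2
du --inodes /tmp/demo-export   # inode count: the directory itself plus 2 files -> 3
df -T /tmp/demo-export         # filesystem type and used blocks backing the path
rm -rf /tmp/demo-export
```

Comparing `du --inodes` against `df -T` (or `df -i`) on both client and server is a reasonable way to spot the mismatch being discussed.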
On Wed, May 13, 2015 at 11:38:51AM +, Cao, Vinh wrote:
> Sounds like the process that created the file while you were moving
> it to another node still has it open.
If I understand correctly, the filesystem is still unmountable. If a
process held a file on the filesystem open, an unmount attempt w
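A quick way to confirm the "open file blocks unmount" theory is to scan /proc for descriptors pointing under the mount point, which is roughly what `fuser -m` does. A sketch only: the path here is an assumed example, not the poster's filesystem.

```shell
# Assumed example path; substitute the filesystem that refuses to unmount.
MNT=/tmp
for pid in /proc/[0-9]*; do
    # readlink each open fd and check whether it lives under $MNT
    for fd in "$pid"/fd/*; do
        tgt=$(readlink "$fd" 2>/dev/null) || continue
        case $tgt in
            "$MNT"/*) echo "busy: pid ${pid#/proc/} holds $tgt"; break ;;
        esac
    done
done
```

Any PID this prints would make `umount` fail with EBUSY until the process closes the file or exits.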
On Tue, May 12, 2015 at 12:37:10AM +0200, gianpietro.se...@unipd.it wrote:
> > On Sun, May 10, 2015 at 11:28:25AM +0200, gianpietro.se...@unipd.it wrote:
> >> Hi, sorry for my bad English.
> >> I am testing an NFS cluster, active/passive (2 nodes).
> >> I followed these instructions for NFS:
> >>
> >> https
On Sun, May 10, 2015 at 11:28:25AM +0200, gianpietro.se...@unipd.it wrote:
> Hi, sorry for my bad English.
> I am testing an NFS cluster, active/passive (2 nodes).
> I followed these instructions for NFS:
>
> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/High_Availability_Ad
On Tue, Oct 22, 2013 at 09:25:40AM +, Colin Simpson wrote:
> On Tue, 2013-10-22 at 10:24 +0200, Lars Marowsky-Bree wrote:
> > On 2013-10-21T15:58:18, Alan Brown wrote:
> >
> > > As anyone who's tried to use kernel NFS in a clustered environment knows,
> > > it's fraught with issues which risk
On Tue, Sep 24, 2013 at 11:29:07AM +0200, Olivier Desport wrote:
> Hello,
>
> I've installed a two nodes GFS2 cluster on Debian 7.
What kernel is that?
--b.
> The nodes are
> connected to the datas by iSCSI and multipathing with a 10 Gb/s
> link. I can write a 1g file with dd at 500 Mbytes/s. I
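The write test the poster describes is the usual dd pattern. A sketch with an invented path and a much smaller size; `conv=fsync` makes dd flush to stable storage before reporting the rate, so the figure is not just page-cache speed.

```shell
# Invented path and reduced size for illustration; the poster wrote a 1 GB file.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=16 conv=fsync
rm -f /tmp/ddtest
```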
On Mon, Jul 11, 2011 at 11:43:58AM +0100, Steven Whitehouse wrote:
> Hi,
>
> On Mon, 2011-07-11 at 09:30 +0100, Alan Brown wrote:
> > On 08/07/11 22:09, J. Bruce Fields wrote:
> >
> > > With default mount options, the linux NFS client (like most NFS clients)
> &
On Fri, Jul 08, 2011 at 06:36:53PM +0100, Alan Brown wrote:
> On Fri, 8 Jul 2011, Colin Simpson wrote:
>
> > That's not ideal either, since Samba isn't too happy working over NFS,
> > and the Samba people don't recommend it as a sensible config.
>
> I know but there's a real (and demo
On Fri, Jul 11, 2008 at 11:33:08PM -0400, bfields wrote:
> On Fri, Jul 11, 2008 at 07:25:29PM -0400, bfields wrote:
> > On Thu, Jul 10, 2008 at 02:27:14PM +0100, Steven Whitehouse wrote:
> > > a packet that's supposedly from .129, except that its MAC address is now
> > > 0:ff:1d:e9:b9:a3. So it looks
On Fri, Jul 11, 2008 at 07:25:29PM -0400, bfields wrote:
> On Thu, Jul 10, 2008 at 02:27:14PM +0100, Steven Whitehouse wrote:
> > a packet that's supposedly from .129, except that its MAC address is now
> > 0:ff:1d:e9:b9:a3. So it looks like the .129 address might be configured
> > on two different n
On Thu, Jul 10, 2008 at 02:27:14PM +0100, Steven Whitehouse wrote:
> Hi,
>
> On Wed, 2008-07-09 at 12:32 -0400, J. Bruce Fields wrote:
> > On Wed, Jul 09, 2008 at 04:50:14PM +0100, Christine Caulfield wrote:
> > > J. Bruce Fields wrote:
> > >> On Wed, Jul 09,
On Thu, Jul 10, 2008 at 10:26:54AM +0100, Christine Caulfield wrote:
> J. Bruce Fields wrote:
>> On Wed, Jul 09, 2008 at 04:50:14PM +0100, Christine Caulfield wrote:
>>> J. Bruce Fields wrote:
>>>> On Wed, Jul 09, 2008 at 09:51:02AM +0100, Christine Caulfield wrote:
On Wed, Jul 09, 2008 at 04:50:14PM +0100, Christine Caulfield wrote:
> J. Bruce Fields wrote:
>> On Wed, Jul 09, 2008 at 09:51:02AM +0100, Christine Caulfield wrote:
>>> Steven Whitehouse wrote:
>>>> Hi,
>>>>
>>>> On Tue, 2008-07-08 at 18:15 -0
On Wed, Jul 09, 2008 at 09:44:24AM +0100, Steven Whitehouse wrote:
> Hi,
>
> On Tue, 2008-07-08 at 18:15 -0400, J. Bruce Fields wrote:
> > On Mon, Jul 07, 2008 at 02:49:28PM -0400, bfields wrote:
> > > On Mon, Jul 07, 2008 at 10:48:28AM -0500, David Teigland wrote:
> &g
On Wed, Jul 09, 2008 at 09:51:02AM +0100, Christine Caulfield wrote:
> Steven Whitehouse wrote:
>> Hi,
>>
>> On Tue, 2008-07-08 at 18:15 -0400, J. Bruce Fields wrote:
>>> On Mon, Jul 07, 2008 at 02:49:28PM -0400, bfields wrote:
>>>> On Mon, Jul 07, 2008
On Mon, Jul 07, 2008 at 02:49:28PM -0400, bfields wrote:
> On Mon, Jul 07, 2008 at 10:48:28AM -0500, David Teigland wrote:
> > On Sun, Jul 06, 2008 at 05:51:05PM -0400, J. Bruce Fields wrote:
> > > - write(control_fd, in, sizeof(struct gdlm_plock_info));
> > > + write(co
On Mon, Jul 07, 2008 at 10:48:28AM -0500, David Teigland wrote:
> On Sun, Jul 06, 2008 at 05:51:05PM -0400, J. Bruce Fields wrote:
> > - write(control_fd, in, sizeof(struct gdlm_plock_info));
> > + write(control_fd, in, sizeof(struct dlm_plock_info));
>
> Gah, sorry, I ke
On Fri, Jun 27, 2008 at 01:41:17PM -0500, David Teigland wrote:
> On Fri, Jun 27, 2008 at 01:28:56PM -0400, david m. richter wrote:
> > i also have another setup in vmware; while i doubt it's
> > substantively different than bruce's, i'm a ready and willing tester. is
> > there a different b
On Fri, Jun 27, 2008 at 12:18:45PM -0500, David Teigland wrote:
> On Thu, Jun 26, 2008 at 05:10:52PM -0400, J. Bruce Fields wrote:
> > > > So, the first mount (on "piglet1") succeeds. The second (on "piglet2")
> > > > returns immediatel
On Thu, Jun 26, 2008 at 04:33:15PM -0400, bfields wrote:
> On Thu, Jun 26, 2008 at 03:11:06PM -0400, bfields wrote:
> > On Thu, Jun 26, 2008 at 02:35:29PM -0400, bfields wrote:
> > > On Thu, Jun 26, 2008 at 10:27:33AM -0500, David Teigland wrote:
> > > > This mount appears to have been successful.
On Thu, Jun 26, 2008 at 03:11:06PM -0400, bfields wrote:
> On Thu, Jun 26, 2008 at 02:35:29PM -0400, bfields wrote:
> > On Thu, Jun 26, 2008 at 10:27:33AM -0500, David Teigland wrote:
> > > This mount appears to have been successful. Usual things to collect for
> > > debugging the other problems:
On Thu, Jun 26, 2008 at 02:35:29PM -0400, bfields wrote:
> On Thu, Jun 26, 2008 at 10:27:33AM -0500, David Teigland wrote:
> > This mount appears to have been successful. Usual things to collect for
> > debugging the other problems:
> > - any errors in /var/log/messages from all nodes
> > - cman_t
On Thu, Jun 26, 2008 at 10:27:33AM -0500, David Teigland wrote:
> On Wed, Jun 25, 2008 at 06:45:44PM -0400, J. Bruce Fields wrote:
> > I'm trying to get a gfs2 file system running on some kvm hosts, using an
> > ordinary qemu disk for the shared storage (is there any reason
On Thu, Jun 26, 2008 at 02:56:10PM +0100, Steven Whitehouse wrote:
> Hi,
>
> On Wed, 2008-06-25 at 18:45 -0400, J. Bruce Fields wrote:
> > I'm trying to get a gfs2 file system running on some kvm hosts, using an
> > ordinary qemu disk for the shared storage (is there
I'm trying to get a gfs2 file system running on some kvm hosts, using an
ordinary qemu disk for the shared storage (is there any reason this
can't work?).
I installed openais80.3 from source (after modifying Makefile so "make
install" would install to /), and installed gfs2 from the STABLE2 branch
On Tue, Aug 21, 2007 at 03:45:22PM +0200, kieran JOYEUX wrote:
> For the moment the two NFS shares' content is the same. I used scp to
> copy them. It's static, so I did it just once.
So, you just did something like this?:
scp -r /shares/someshare/ otherserver:/shares/
> To resolve that
On Tue, Aug 21, 2007 at 01:02:04PM +0200, kieran JOYEUX wrote:
> I don't have any replication system. I don't really need one; rsync would
> be fine. All I want is to avoid that NFS stale file handle error... With
> Heartbeat + DRBD I have no such issues.
Are you using rsync to copy the files in
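If rsync replaces the one-shot scp, the invocation could look like this sketch; the host and path names are invented to mirror the earlier scp example.

```shell
# Invented names, mirroring the scp example earlier in the thread.
rsync -aH --delete /shares/someshare/ otherserver:/shares/someshare/
# -a preserves permissions, ownership, and timestamps; -H preserves hard
# links; --delete removes files on the target that vanished from the source.
```

Unlike scp -r, repeated runs only transfer the differences, which suits periodic resyncs between the two servers.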