Throughput increase with LACP bonding

2012-07-03 Thread Madhusudhana U
Hi all, I am trying to increase throughput in the cluster by enabling LACP on both the clients and all ceph cluster nodes. Each client and ceph node has two 1G Ethernet interfaces, which I want to aggregate to make 2G. LACP has been configured on the switch side too. But even after the
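
A minimal bonding sketch for a Debian-style system, assuming the ifenslave package and interfaces eth0/eth1; the device names and address below are illustrative, not taken from the thread:

    # /etc/network/interfaces fragment (eth0/eth1 and the address are assumptions)
    auto bond0
    iface bond0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        bond-slaves eth0 eth1
        bond-mode 802.3ad            # LACP; the switch ports must be in a matching LAG
        bond-miimon 100              # link-state polling interval in ms
        bond-xmit-hash-policy layer3+4

Note that LACP hashes each flow onto one physical link, so a single TCP stream still tops out at 1G; aggregate throughput only approaches 2G with several concurrent flows.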

Re: NFS re-exporting CEPH cluster

2012-05-29 Thread madhusudhana U
Greg Farnum greg at inktank.com writes: Have you tried something and it failed? Or are you looking for suggestions? If the former, please report the failure. :) If the latter: http://ceph.com/wiki/Re-exporting_NFS -Greg Greg, I have tried the link. But my production build (t_make) is
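
One detail that trips up FUSE-backed exports: the kernel NFS server needs an explicit fsid in /etc/exports, because FUSE filesystems lack a stable device number. A hedged example, where the path and subnet are assumptions:

    # /etc/exports (illustrative path and client subnet)
    /ceph_cluster  192.168.1.0/24(rw,sync,fsid=20,no_subtree_check)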

Re: Huge MDS log crashing the cluster

2012-05-23 Thread Madhusudhana U
Tommi Virtanen tv at inktank.com writes: The default logrotate script installed by ceph.deb rotates log files daily and preserves 7 days of logs. If your /var is tiny, or you have heavy debugging turned on, you probably need to rotate more often and retain fewer log files. Or, if you're not
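
A sketch of a tighter policy in /etc/logrotate.d/ceph, rotating by size and keeping fewer files; the size and retention count are assumptions to tune against your /var:

    /var/log/ceph/*.log {
        rotate 3
        size 500M
        compress
        missingok
        sharedscripts
        postrotate
            /etc/init.d/ceph reload >/dev/null 2>&1 || true   # ask daemons to reopen logs
        endscript
    }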

NFS re-exporting CEPH cluster

2012-05-23 Thread Madhusudhana U
Hi all, has anyone tried re-exporting a CEPH cluster via NFS with success (I mean, mount the CEPH cluster on one machine and then export it via NFS to clients)? I need to do this because of my client kernel version and some EDA tool compatibility. Can someone suggest how I can
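
For reference, a minimal gateway flow, assuming ceph-fuse plus the kernel NFS server; the monitor address, mount point, and hostname are illustrative:

    # on the gateway node
    ceph-fuse -m ceph-node-1:6789 /ceph_cluster   # mount the cluster via FUSE
    exportfs -ra                                  # re-read /etc/exports
    # on an NFS client
    mount -t nfs gateway:/ceph_cluster /mnt/ceph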

Replication at file/folder level

2012-05-21 Thread Madhusudhana U
Hi all, I assume that in CEPH, by default, replication is set for both data and metadata. Is it possible to set replication for individual files/folders? I would find this very useful. In most cases, we may need more protection for just a few files/folders. Instead of setting replication
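
Replication in Ceph is controlled per pool rather than per file, so one way to approximate this is a dedicated pool with a higher replica count; the pool name and PG count below are assumptions:

    ceph osd pool create important 128    # illustrative pool with 128 placement groups
    ceph osd pool set important size 3    # keep 3 replicas of everything in this pool

A CephFS directory can then be pointed at that pool via file layouts (the cephfs utility in this era), giving a chosen subtree stronger protection than the default data pool.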

Re: issue with mounting ceph cluster over NFS

2012-05-16 Thread madhusudhana U
Is only one MDS active? Yes, only one MDS is active. Does the build work on ceph-fuse without NFS? I can't run the build without NFS on ceph-fuse because the build runs on a cluster (we use LSF for it) where each machine mounts the directory. The build is split into many small jobs [around 800 in

issue with mounting ceph cluster over NFS

2012-05-15 Thread madhusudhana U
Hi, I have a ceph cluster with 5 nodes, of which 2 are MDS, 3 are MON, and all 5 act as OSDs. I have mounted the ceph cluster on one node in the cluster and exported the mounted dir via NFS. Below is what my mount and exports file look like: ceph-fuse on /ceph_cluster type fuse.ceph-fuse
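
For context, an era-style ceph.conf fragment for that layout might look like the following sketch; the hostnames, address, and data path are assumptions:

    [mon.a]
        host = ceph-node-1
        mon addr = 192.168.1.1:6789
    [mds.a]
        host = ceph-node-1
    [osd.0]
        host = ceph-node-1
        osd data = /data/$name        ; illustrative data path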

SAS disks for OSDs

2012-04-23 Thread Madhusudhana U
Hi all, is there any performance benefit to using SAS storage for OSDs instead of SATA storage? Is anyone using SAS drives and getting good performance? Or could I use SAS drives for the journal and SATA storage for the OSD data? Which of the above two scenarios would yield better performance?
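
One way to ground the choice is to benchmark a candidate drive with a journal-like pattern (small sequential synchronous writes) before committing; a hedged fio sketch, where /dev/sdX is a placeholder and the run overwrites data on that device:

    fio --name=journal-sim --filename=/dev/sdX --rw=write --bs=4k \
        --direct=1 --fsync=1 --runtime=60 --time_based --size=1G

SAS drives often do best on exactly this fsync-heavy pattern, which is why journals on SAS with bulk data on SATA is a common compromise.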

Upgrade ceph to 0.45 version

2012-04-18 Thread Madhusudhana U
Hi all, I am currently running ceph ver 0.41 in my cluster and I would like to upgrade it to the 0.45 version. Can someone shed light on the procedure to upgrade the cluster without destroying data. Thanks __Madhusudhana
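
A hedged sketch of the usual rolling approach, assuming Debian packages and the sysvinit script of that era; a package upgrade replaces only the binaries, so on-disk daemon data is untouched:

    apt-get update && apt-get install ceph   # pull the 0.45 packages on this node
    /etc/init.d/ceph restart mon             # restart monitors first
    /etc/init.d/ceph restart mds
    /etc/init.d/ceph restart osd
    ceph -s                                  # wait for HEALTH_OK before the next node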

Partition for OSD journal

2012-04-12 Thread Madhusudhana U
Hi all, I read in the wiki that, to get better performance compared to a file under the OSD data dir, we can have a separate partition for the OSD journal. Can someone suggest how I can specify this in the ceph.conf file? I have the below lines in my ceph.conf file. Does this mean I already have a partition
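
For what it's worth, a hedged ceph.conf fragment pointing the journal at a dedicated partition; the device name is an assumption:

    [osd]
        osd journal = /dev/sdb1        ; block-device journal (illustrative device)
        ; for a file journal instead:
        ; osd journal = /data/$name/journal
        ; osd journal size = 1000      ; in MB, relevant for file journals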

How to set MDS log size

2012-04-11 Thread Madhusudhana U
Hi all, on the MDS node of my ceph cluster, the entire root partition is full because of one big mds log file:

    [root@ceph-node-7 ceph]# du -sh *
    0     mds.admin.log
    27G   mds.ceph-node-7.log
    [root@ceph-node-7 ceph]# df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda2
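
A 27G mds log almost always means verbose debugging is enabled; a hedged ceph.conf fragment that dials it back (restart the MDS, or inject the setting at runtime, for it to take effect):

    [mds]
        debug mds = 1    ; low verbosity; 20 is full debug
        debug ms = 0     ; silence messenger tracing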