Very possible. They were off by about 5 minutes. I was able to connect back to
the MDS after I installed NTP on all of them. This happened at wake on the client
(including after the installation of NTP on all nodes).
[137537.630016] libceph: mds0 192.168.1.20:6803 socket closed (con state
NEGOTIATING)
[137707.
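For reference, a minimal sketch of the skew check and the NTP install on each node (assuming Debian/Ubuntu packages; adjust for your distro):
$ ceph health detail       # warns about clock skew above mon_clock_drift_allowed (0.05 s by default)
$ sudo apt-get install ntp
$ ntpq -p                  # offsets should settle down to a few milliseconds
$ date -u                  # quick cross-check of the wall clock on each node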
Hi,
I have successfully built up my testing ceph cluster with
radosgw. Everything is working fine... but I wondered about the directory listing
of radosgw. I tried all the ways (dismod, -Indexes) to disable directory
listing on the Apache web server, but no luck.
Can you people suggest me the way I can
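For what it's worth, the usual Apache-side knobs are roughly these (assuming a Debian-style Apache in front of the radosgw FastCGI; the directory path is just an example), though if the listing you see is actually radosgw's own S3 XML response rather than Apache autoindex output, they won't change anything:
$ sudo a2dismod autoindex
$ sudo service apache2 restart
# or per directory in the radosgw vhost:
#   <Directory /var/www>
#       Options -Indexes
#   </Directory>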
On 03/25/2014 06:30 PM, Andrei Mikhailovsky wrote:
Mark, thanks for your reply.
I've tried increasing the readahead values to 4M (8192) for all osds as well as
all vm block devices. This did not change performance much, the osd values are
still in the region of 30 MB/s.
All vms are using virt
Hi Greg,
Restarting the actual service, i.e. service ceph restart osd.50, only takes a few
seconds.
Attached is a ceph -w of just running a service ceph restart osd.50,
You can see it marks itself down pretty much straight away. Takes a little
while to mark itself as up and finish "recovery"
I
On Tue, 25 Mar 2014, Michael Nelson wrote:
On Tue, 25 Mar 2014, Michael Nelson wrote:
I am setting up a new test cluster on 0.78 using the same configuration
that
was successful on 0.72. After creating a new S3 account, a simple operation
of listing buckets (which will be empty obviously) is
... and it was, bit of a foot gun there. Thanks Greg, you were right on
the money.
So it looks like my 'make install' puts the libraries in /usr/lib (as
expected):
$ ls -l /usr/lib/librados.so.2.0.0
-rwxr-xr-x 1 root root 92296482 Mar 26 12:57 /usr/lib/librados.so.2.0.0
whereas the Ubuntu p
I see I have librbd1 and librados2 at 0.72.2 (due to having qemu
installed on this machine). That could be the source of the problem,
I'll see if I can update them (I have pending updates I think), and
report back.
Cheers
Mark
On 26/03/14 12:23, Mark Kirkwood wrote:
Yeah, it seems possible,
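A quick way to check is something like this (paths assume the /usr/lib install above):
$ dpkg -l | grep -E 'librados2|librbd1'    # what the packaging system thinks is installed
$ ldconfig -p | grep librados              # which librados.so.2 the loader will actually resolve
$ ceph --version                           # what the CLI reports
$ ldd /usr/bin/ceph-osd | grep librados    # which copy a daemon binary links against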
Hi,
I am trying to find out how to upgrade my current ceph (v0.72.2 Emperor) to v0.8. I searched
on the Internet and on the ceph.com website, but I haven't found any guide yet.
Please show me how to upgrade, or point me to some resources on how to perform it.
Best regards,
Thanh Tran
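(For anyone in the same position, the usual order from the upstream release notes is: upgrade the packages, then restart mons first, then OSDs, then MDS/radosgw, checking health between steps. A rough sketch, assuming Debian/Ubuntu packages from the ceph.com firefly repo:)
$ sudo apt-get update && sudo apt-get install ceph ceph-common   # on each node
$ sudo service ceph restart mon     # monitors first, one node at a time
$ ceph health                       # wait for HEALTH_OK before moving on
$ sudo service ceph restart osd     # then the OSDs, a node at a time
$ sudo service ceph restart mds     # finally MDS (and radosgw) if in use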
Mark, thanks for your reply.
I've tried increasing the readahead values to 4M (8192) for all osds as well as
all vm block devices. This did not change performance much, the osd values are
still in the region of 30 MB/s.
All vms are using virtio drivers and I've got 3.8.0 kernel on the osd serv
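In case the values ended up somewhere unexpected, this is the kind of thing meant by 4M (8192) readahead; the device names are just examples, and blockdev --setra counts 512-byte sectors:
$ sudo blockdev --setra 8192 /dev/sdb       # 8192 * 512 B = 4 MB, on each OSD data disk
$ sudo blockdev --getra /dev/sdb            # verify
$ cat /sys/block/sdb/queue/read_ahead_kb    # the same setting expressed in KB (4096)
# and the same again on the virtio devices (e.g. /dev/vda) inside the guests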
Yeah, it seems possible, however I'm installing 'em all the same way (in
particular the 0.77 that works and the 0.77 that does not). The method is:
$ ./autogen.sh
$ ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var
--with-radosgw
$ time make
$ sudo make install
[probably not r
On 26/03/14 04:05, Andrey Korolyov wrote:
> Actually it will commit all the bytes. If one prefers to keep discarded
> places, it is necessary to throw out the copy to the filesystem (or
> implement a chunk-reader pipe for the rbd client).
True, but it had the advantage that I didn't have to try and restore a
hello
one of my OSDs refuses to start; it wouldn't be a problem if it
didn't have unreplicated data on it :(
When I launch it manually I get this trace:
|ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60)
1: /usr/bin/ceph-osd() [0x99b742]
2: (()+0xf030) [0x7f97e1a06030]
3: (gs
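(For reference, one way to capture the complete backtrace is to run the OSD in the foreground with logging turned up; the id and debug levels here are just examples:)
$ sudo ceph-osd -i <osd-id> -f --debug-osd 20 --debug-ms 1
# then look at the end of /var/log/ceph/ceph-osd.<osd-id>.log for the full trace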
Well since I spammed the list earlier, I should fess up to my mistakes. I
forgot to change MTU sizes on the 10G switch after I switched to jumbo
frames. So yes, I had a very unhappy networking stack.
On the upside, playing with Cumulus Linux on switches is fun.
On Tue, Mar 25, 2014 at 1:12 PM,
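For anyone chasing the same thing, a quick sanity check for a jumbo-frame mismatch (interface name and MTU are examples):
$ ip link show eth0 | grep mtu      # confirm the MTU on each host interface
$ ping -M do -s 8972 <peer-ip>      # 8972 + 28 bytes of headers = 9000; fails if anything in the path drops jumbo frames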
On 03/25/2014 02:08 PM, Graeme Lambert wrote:
> Hi Stuart,
>
> If this helps, these three lines will do it for you. I'm sure you could
> rustle up a script to go through all of your images and do this for you.
>
> rbd export libvirt-pool/my-server - | rbd import --image-format 2 -
> libvirt-pool
On Tue, Mar 25, 2014 at 9:56 AM, hjcho616 wrote:
> I am merely putting the client to sleep and waking it up. When it is up,
> running ls on the mounted directory. As far as I am concerned at very high
> level I am doing the same thing. All are running 3.13 kernel Debian
> provided.
>
> When tha
Hi Mark
On 25/03/14 00:37, Mark Kirkwood wrote:
> I'm redeploying my development cluster after building 0.78 from src
> on Ubuntu 14.04. Ceph version is ceph version 0.78-325-ge5a4f5e.
Just so you know, 0.78 is now in Ubuntu 14.04; it should be fol
So I don't remember exactly the relationships, but /usr/bin/ceph is a
bit of python wrapping some libraries. I think it should be getting
the version number from the right place, but wonder if some of them
aren't being updated appropriately. How are you installing these
binaries?
In particular, tha
I've wanted to try hacking a Pogoplug (~$12 on eBay with free shipping
right now, as I just saw on Dealigg this week) to become an OSD (inserting
one 3TB disk). I believe all of these have a SATA port, though the amount
of RAM varies. These lesser ARM processors are pretty slow for this kind
of w
Thanks for the feedback -- I'll post back with more detailed logs if
anything looks fishy!
On Tue, Mar 25, 2014 at 1:10 PM, Gregory Farnum wrote:
> Well, you could try running with messenger debugging cranked all the
> way up and see if there's something odd happening there (eg, not
> handling
Well, you could try running with messenger debugging cranked all the
way up and see if there's something odd happening there (eg, not
handling incoming messages), but based on not having any other reports
of this, I think your networking stack is unhappy in some way. *shrug*
(Higher log levels show
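(For the archives, cranking the messenger debugging up looks roughly like this; remember to turn it back down afterwards, the logs get big fast:)
$ ceph tell 'mon.*' injectargs '--debug-ms 20'    # or osd.*, if that's where the trouble is
# or persistently, in ceph.conf under the relevant section, then restart:
#   debug ms = 20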
On 03/25/2014 10:49 AM, Loic Dachary wrote:
> Hi,
>
> It's not available yet but ... are we far away ?
It's a pity the Pi doesn't do SATA. Otherwise all you'd need is a working ARM
port and some scripting...
--
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.ed
On 25/03/2014 17:11, Olivier Bonvalet wrote:
> Hi,
>
> not sure it's related to ceph... you should probably look at the ownCloud
> project, no ?
>
> Or use any S3/Swift client which will know how to exchange data with a
> RADOS gateway.
Owncloud could be one way to use this. But you would have to
On Tue, Mar 25, 2014 at 12:53 PM, Gregory Farnum wrote:
> On Tue, Mar 25, 2014 at 9:24 AM, Travis Rhoden wrote:
> > Okay, last one until I get some guidance. Sorry for the spam, but
> wanted to
> > paint a full picture. Here are debug logs from all three mons, capturing
> > what looks like an
I am merely putting the client to sleep and waking it up. When it is up,
running ls on the mounted directory. As far as I am concerned at very high
level I am doing the same thing. All are running 3.13 kernel Debian provided.
When that infinite loop of decrypt error happened, I waited about 1
On Tue, Mar 25, 2014 at 9:24 AM, Travis Rhoden wrote:
> Okay, last one until I get some guidance. Sorry for the spam, but wanted to
> paint a full picture. Here are debug logs from all three mons, capturing
> what looks like an election sequence to me:
>
> ceph0:
> 2014-03-25 16:17:24.324846 7fa
On Tue, Mar 25, 2014 at 12:28 PM, Alexandre DERUMIER wrote:
> Hi, can you ping between your hosts ?
>
I can. And, from my first post, I can do things like "telnet 10.10.30.1
6789" from 10.10.30.0 to see that I can actually reach the monitor sockets.
> (just to be sure, what is your netmask ? (
Sounds like you want Plex (if sharing media is your only goal) or
http://www.filetransporter.com/ (for general purpose cloud file sharing).
Perhaps paired with a Drobo for easy expandability and redundancy. Not
sure where Ceph fits in that picture as it is currently much more complex...
On 3/25/14
Hi, can you ping between your hosts ?
(just to be sure, what is your netmask ? (as I see 10.10.30.0 for mon1)
- Original Message -
From: "Travis Rhoden"
To: "ceph-users"
Sent: Tuesday, 25 March 2014 17:24:19
Subject: Re: [ceph-users] Monitors stuck in "electing"
Okay, last one until I ge
Okay, last one until I get some guidance. Sorry for the spam, but wanted
to paint a full picture. Here are debug logs from all three mons,
capturing what looks like an election sequence to me:
ceph0:
2014-03-25 16:17:24.324846 7fa5c53fc700 5 mon.ceph0@0(electing).elector(35)
start -- can i be l
Hi,
not sure it's related to ceph... you should probably look at the ownCloud
project, no?
Or use any S3/Swift client which will know how to exchange data with a
RADOS gateway.
On Tuesday, 25 March 2014 at 16:49 +0100, Loic Dachary wrote:
> Hi,
>
> It's not available yet but ... are we far away ?
On Mon, Mar 24, 2014 at 6:26 PM, hjcho616 wrote:
> I tried the patch twice. First time, it worked. There was no issue.
> Connected back to MDS and was happily running. All three MDS demons were
> running ok.
>
> Second time though... all three demons were alive. Health was reported OK.
> Howev
I bumped debug mon and debug ms up on one of the monitors (ceph0), and this
is what I see:
2014-03-25 16:02:19.273406 7fa5c53fc700 5 mon.ceph0@0(electing).elector(35)
election timer expired
2014-03-25 16:02:19.273447 7fa5c53fc700 5 mon.ceph0@0(electing).elector(35)
start -- can i be leader?
2014
How long does it take for the OSDs to restart? Are you just issuing a
restart command via upstart/sysvinit/whatever? How many OSDMaps are
generated from the time you issue that command to the time the cluster
is healthy again?
This sounds like an issue we had for a while where OSDs would start
pee
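A crude way to answer the OSDMap question is to note the osdmap epoch just before the restart and again once the cluster is healthy; the difference is the number of maps generated (the epoch numbers below are placeholders):
$ ceph osd dump | head -1     # e.g. 'epoch 1234'
$ service ceph restart osd.50
$ ceph -w                     # wait for the cluster to go healthy again
$ ceph osd dump | head -1     # e.g. 'epoch 1240', i.e. 6 maps for the restart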
Hi,
It's not available yet but ... are we far away ?
I would like to go to the hardware store and buy a Ceph-enabled disk, plug it
into my internet box and use it. I would buy another for my sister and pair it
with mine so we share photos and movies we like. My mail would go there too,
encrypte
Just to emphasize that I don't think it's clock skew, here is the NTP state
of all three monitors:
# ansible ceph_mons -m command -a "ntpq -p" -kK
SSH password:
sudo password [defaults to SSH password]:
ceph0 | success | rc=0 >>
remote refid st t when poll reach delay offse
Hello,
I just deployed a new Emperor cluster using ceph-deploy 1.4. All went very
smoothly, until I rebooted all the nodes. After the reboot, the monitors no
longer form a quorum.
I followed the troubleshooting steps here:
http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-mon/
Specif
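A condensed version of the checks from that page, for reference (monitor names and addresses are examples):
$ ceph daemon mon.ceph0 mon_status      # on each monitor host, via the admin socket
$ ceph daemon mon.ceph0 quorum_status   # only answers once a quorum exists
$ ntpq -p                               # rule out clock skew
$ telnet <mon-ip> 6789                  # confirm each monitor port is reachable from every peer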
On 03/25/2014 07:19 AM, Andrei Mikhailovsky wrote:
Hello guys,
Was hoping someone could help me with strange read performance problems
on osds. I have a test setup of 4 kvm host servers which are running
about 20 test linux vms between them. The vms' images are stored in ceph
cluster and accesse
Hello guys,
Was hoping someone could help me with strange read performance problems on
osds. I have a test setup of 4 kvm host servers which are running about 20 test
linux vms between them. The vms' images are stored in ceph cluster and accessed
via rbd. I also have 2 osd servers with repli
Yep, that works.
2014-03-25 14:45 GMT+04:00 Ilya Dryomov :
> On Tue, Mar 25, 2014 at 12:00 PM, Ирек Фасихов wrote:
> > Hmmm, create another image in another pool. Pool without cache tier.
> >
> > [root@ceph01 cluster]# rbd create test/image --size 102400
> > [root@ceph01 cluster]# rbd -p test ls
On Tue, Mar 25, 2014 at 12:00 PM, Ирек Фасихов wrote:
> Hmmm, create another image in another pool. Pool without cache tier.
>
> [root@ceph01 cluster]# rbd create test/image --size 102400
> [root@ceph01 cluster]# rbd -p test ls -l
> NAME SIZE PARENT FMT PROT LOCK
> image 102400M 1
> [
Thanks, Ilya.
2014-03-25 14:24 GMT+04:00 Ilya Dryomov :
> On Tue, Mar 25, 2014 at 10:14 AM, Ирек Фасихов wrote:
> > I want to create an image in format 2 through cache tier, but get an
> error
> > creating.
> >
> > [root@ceph01 cluster]# rbd create rbd/myimage --size 102400
> --image-format 2
>
On Tue, Mar 25, 2014 at 10:14 AM, Ирек Фасихов wrote:
> I want to create an image in format 2 through cache tier, but get an error
> creating.
>
> [root@ceph01 cluster]# rbd create rbd/myimage --size 102400 --image-format 2
> 2014-03-25 12:03:44.835686 7f668e09d760 1 -- :/0 messenger.start
> 2014
Hi Stuart,
If this helps, these three lines will do it for you. I'm sure you could
rustle up a script to go through all of your images and do this for you.
rbd export libvirt-pool/my-server - | rbd import --image-format 2 - libvirt-pool/my-server2
rbd rm libvirt-pool/my-server
rbd mv libvirt-pool/m
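And a rough sketch of the sort of loop I mean, assuming every image in libvirt-pool should be converted and the guests using them are shut down first:
for img in $(rbd -p libvirt-pool ls); do
    rbd export libvirt-pool/$img - | rbd import --image-format 2 - libvirt-pool/${img}-new
    rbd rm libvirt-pool/$img
    rbd mv libvirt-pool/${img}-new libvirt-pool/$img
done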
Hmmm, create another image in another pool. Pool without cache tier.
[root@ceph01 cluster]# rbd create test/image --size 102400
[root@ceph01 cluster]# rbd -p test ls -l
NAME SIZE PARENT FMT PROT LOCK
image 102400M 1
[root@ceph01 cluster]# ceph osd dump | grep test
pool 4 'test' replic
On Tue, Mar 25, 2014 at 10:59 AM, Ирек Фасихов wrote:
> Ilya, set "chooseleaf_vary_r 0", but no map rbd images.
>
> [root@ceph01 cluster]# rbd map rbd/tst
> 2014-03-25 12:48:14.318167 7f44717f7760 2 auth: KeyRing::load: loaded key
> file /etc/ceph/ceph.client.admin.keyring
> rbd: add failed: (5)
I also added a new log in Google Drive.
https://drive.google.com/folderview?id=0BxoNLVWxzOJWX0NLV1kzQ1l3Ymc&usp=sharing
2014-03-25 12:59 GMT+04:00 Ирек Фасихов :
> Ilya, set "chooseleaf_vary_r 0", but no map rbd images.
>
> [root@ceph01 cluster]# *rbd map rbd/tst*
> 2014-03-25 12:48:14.318167 7
Ilya, set "chooseleaf_vary_r 0", but no map rbd images.
[root@ceph01 cluster]# *rbd map rbd/tst*
2014-03-25 12:48:14.318167 7f44717f7760 2 auth: KeyRing::load: loaded key
file /etc/ceph/ceph.client.admin.keyring
rbd: add failed: (5) Input/output error
[root@ceph01 cluster]# *cat /var/log/message
On Tue, Mar 25, 2014 at 8:38 AM, Ирек Фасихов wrote:
> Hi, Ilya.
>
> I added the files (crushd and osddump) to a folder in Google Drive.
>
> https://drive.google.com/folderview?id=0BxoNLVWxzOJWX0NLV1kzQ1l3Ymc&usp=sharing
OK, so this has nothing to do with caching. You have chooseleaf_vary_r
set to
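(For reference, the usual round trip for changing a crush tunable such as chooseleaf_vary_r by hand; the filenames are arbitrary:)
$ ceph osd getcrushmap -o crush.bin
$ crushtool -d crush.bin -o crush.txt
# edit the 'tunable chooseleaf_vary_r ...' line in crush.txt
$ crushtool -c crush.txt -o crush.new
$ ceph osd setcrushmap -i crush.new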
I want to create an image in format 2 through cache tier, but get an error
creating.
[root@ceph01 cluster]# *rbd create rbd/myimage --size 102400 --image-format
2*
2014-03-25 12:03:44.835686 7f668e09d760 1 -- :/0 messenger.start
2014-03-25 12:03:44.835994 7f668e09d760 2 auth: KeyRing::load: load
On Tue, 25 Mar 2014, Michael Nelson wrote:
I am setting up a new test cluster on 0.78 using the same configuration that
was successful on 0.72. After creating a new S3 account, a simple operation
of listing buckets (which will be empty obviously) is resulting in
an HTTP 500 error.
Turned up
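For completeness, the knobs that usually produce useful detail for this sort of 500 live in ceph.conf under the radosgw client section (the section name below is just the conventional example), plus the Apache/FastCGI error log:
[client.radosgw.gateway]
    debug rgw = 20
    debug ms = 1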
Hi there,
We have a use case for the C APIs for compound operations in librados. These
interfaces are only exported as C APIs from the Firefly release onward, such as
https://github.com/ceph/ceph/blob/firefly/src/include/rados/librados.h#L1834.
But our RADOS deployment will stick to the Dumpling release right