Hi,
I had a server failure that started with one disk failure:
Oct 14 03:25:04 s3-10-177-64-6 kernel: [1027237.023986] sd 4:2:26:0: [sdaa] Unhandled error code
Oct 14 03:25:04 s3-10-177-64-6 kernel: [1027237.023990] sd 4:2:26:0: [sdaa] Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
Oct 14 03:25:04
hi all
I followed the mail to configure Ceph with Hadoop
(http://permalink.gmane.org/gmane.comp.file-systems.ceph.user/1809).
1. Install the additional packages libcephfs-java and libcephfs-jni using the
commands:
./configure --enable-cephfs-java
make
make install
cp
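For the Hadoop side, the bindings also have to be visible to Hadoop's JVM; a minimal sketch of that step (all paths here are assumptions, adjust to your install prefix and Hadoop layout):

# copy the CephFS Java binding and the Hadoop plugin jar into Hadoop's lib dir
cp /usr/share/java/libcephfs.jar $HADOOP_HOME/lib/
cp hadoop-cephfs.jar $HADOOP_HOME/lib/
# make the JNI library visible to Hadoop's JVM
ln -s /usr/lib/libcephfs_jni.so $HADOOP_HOME/lib/native/Linux-amd64-64/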
Hi
I have found something.
After the restart, the time on the server was wrong (+2 hours) before NTP fixed it.
I restarted these 3 OSDs - it did not help.
Is it possible that Ceph banned these OSDs? Or did starting with the wrong
time break the OSDs' filestore?
--
Regards
Dominik
2013/10/14 Dominik Mostowiec
We upgraded from 0.61.8 to 0.67.4.
The metadata commands work for users and buckets:
root@ineri ~$ radosgw-admin metadata list bucket
[
    "a4mesh",
    "61a75c04-34a5-11e3-9bea-8f8d15b5cf20",
    "6e22de72-34a5-11e3-afc4-d3f70b676c52",
    ...
root@ineri ~$ radosgw-admin metadata list user
[
hi all
I followed the mail to configure Ceph with Hadoop
(http://permalink.gmane.org/gmane.comp.file-systems.ceph.user/1809).
1. Install the additional packages libcephfs-java and libcephfs-jni using the
commands:
./configure --enable-cephfs-java
make
make install
Hi, sorry, I missed this mail.
During writes, does the CPU usage on your RadosGW node go way up?
No, CPU usage stays the same, very low (< 10%).
When uploading small files (300 KB/file) over RadosGW:
- using 1 process: upload bandwidth ~ 3 MB/s
- using 100 processes: upload bandwidth ~ 15 MB/s
When upload
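A rough way to reproduce that parallel-upload measurement (a sketch; the bucket name is an assumption, and s3cmd must already be configured against the gateway):

# create 100 files of ~300 KB each, then upload with 100 parallel workers
for i in $(seq 100); do dd if=/dev/urandom of=obj$i bs=1K count=300 2>/dev/null; done
ls obj* | xargs -P 100 -I{} s3cmd put {} s3://testbucket/{}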
Hi ceph-users,
I uploaded an object successfully to radosgw with 3 replicas. And I located
all the physical paths of the 3 replicas on different OSDs.
i.e., one of the 3 physical paths is
/var/lib/ceph/osd/ceph-2/current/3.5_head/DIR_D/default.4896.65\u20131014\u1__head_0646563D__3
Then I manually
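For reference, the object-to-OSD mapping can also be obtained directly, without walking the filestore; a sketch with placeholder pool/object names:

ceph osd map <pool> <objectname>
(this prints the pg id plus the up/acting OSD sets for that object)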
On Sun, Oct 13, 2013 at 8:28 PM, 鹏 wkp4...@126.com wrote:
hi all:
Exception in thread "main" java.lang.NoClassDefFoundError:
com/ceph/fs/cephFileAlreadyExisteException
        at java.lang.Class.forName0(Native Method)
This looks like a bug, which I'll fix up today. But it shouldn't be
The error below seems to indicate that Hadoop isn't aware of the `ceph://`
file system. You'll need to manually add this to your core-site.xml:
<property>
  <name>fs.ceph.impl</name>
  <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
</property>
report:FileSystem
Do you have the following in your core-site.xml?
<property>
  <name>fs.ceph.impl</name>
  <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
</property>
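Beyond the implementation class, a working setup usually needs the default filesystem and the path to ceph.conf as well; a fuller core-site.xml sketch (the monitor address and file path are assumptions):

<property>
  <name>fs.default.name</name>
  <value>ceph://your-mon-host:6789/</value>
</property>
<property>
  <name>ceph.conf.file</name>
  <value>/etc/ceph/ceph.conf</value>
</property>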
On Sun, Oct 13, 2013 at 11:55 PM, 鹏 wkp4...@126.com wrote:
hi all
I followed the mail to configure Ceph with Hadoop
On Mon, Oct 14, 2013 at 4:04 AM, david zhang zhang.david2...@gmail.com wrote:
Hi ceph-users,
I uploaded an object successfully to radosgw with 3 replicas. And I located
all the physical paths of the 3 replicas on different OSDs.
i.e., one of the 3 physical paths is
I've personally saturated 1Gbps links on multiple radosgw nodes on a large
cluster. If I remember correctly, Yehuda has tested it up into the 7Gbps
range with 10Gbps gear. Could you describe your cluster's hardware and
connectivity?
On Mon, Oct 14, 2013 at 3:34 AM, Chu Duc Minh
3 questions:
1. I'd like to use xfs devices with a separate log device in a ceph cluster.
What's the best way to do this? Is it possible to specify xfs log devices in
the [osd.x] sections of ceph.conf?
E.g.:
[osd.0]
host = delta
devs = /dev/sdx
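Something like this is what I have in mind, via the per-OSD mkfs/mount options (a sketch; the log device name is just an example):

[osd.0]
host = delta
devs = /dev/sdx
osd mkfs options xfs = -l logdev=/dev/sdy1
osd mount options xfs = rw,noatime,inode64,logdev=/dev/sdy1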
Hi,
I have a pretty big problem here... my OSDs are marked down (except one?!)
I have ceph version 0.61.8 (a6fdcca3bddbc9f177e4e2bf0d9cdd85006b028b).
I recently had full monitors, so I had to remove them, but it seemed to
work.
# id    weight  type name       up/down reweight
-1
Hello,
I would like to live migrate a VM between two hypervisors. Is it
possible to do this with an rbd disk, or should the VM disks be created as
qcow images on a CephFS/NFS share (is it possible to do CLVM over rbds, or
GlusterFS over rbds?) and point KVM at the network directory? As I
How fragmented is that file system?
Sent from my iPad
On Oct 14, 2013, at 5:44 PM, Bryan Stillwell bstillw...@photobucket.com
wrote:
This appears to be more of an XFS issue than a ceph issue, but I've
run into a problem where some of my OSDs failed because the filesystem
was reported as
I live migrate all the time using the rbd driver in qemu, no problems. Qemu
will issue a flush as part of the migration so everything is consistent. It's
the right way to use ceph to back VMs. I would strongly recommend against a
network file system approach. You may want to look into
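For reference, migrating an rbd-backed guest with libvirt can be as simple as this (a sketch; the domain and host names are placeholders, and both hypervisors need access to the same Ceph cluster):

virsh migrate --live myvm qemu+ssh://hypervisor2/system

with the guest's disk defined in the domain XML roughly as:

<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='rbd/myvm-disk'/>
  <target dev='vda' bus='virtio'/>
</disk>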
The filesystem isn't as full now, but the fragmentation is pretty low:
[root@den2ceph001 ~]# df /dev/sdc1
Filesystem     1K-blocks      Used Available Use% Mounted on
/dev/sdc1      486562672 270845628 215717044  56% /var/lib/ceph/osd/ceph-1
[root@den2ceph001 ~]# xfs_db -c frag -r
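For completeness, the fragmentation check and a possible remedy look roughly like this (a sketch; the device and mount point are assumptions):

xfs_db -c frag -r /dev/sdc1           # prints the filesystem's fragmentation factor
xfs_fsr -v /var/lib/ceph/osd/ceph-1   # online defragmentation of the mounted filesystem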
Hi,
I have a 4-node Ceph cluster (2 mon, 1 mds, 2 osd) and a Hadoop node.
Currently, I'm trying to replace HDFS with CephFS. I followed the instructions
in "Using Hadoop with CephFS", but every time I run bin/start-all.sh to start
Hadoop, it fails with:
starting namenode, logging to
Hi Kai,
It doesn't look like there is anything Ceph specific in the Java
backtrace you posted. Does your installation work with HDFS? Are there
any logs showing an error from the Ceph plugin?
Thanks,
Noah
On Mon, Oct 14, 2013 at 4:34 PM, log1024 log1...@yeah.net wrote:
Hi,
I have a
On 10/13/2013 07:43 PM, alan.zhang wrote:
CPU: Intel(R) Xeon(R) CPU E5620 @ 2.40GHz *2
MEM: 32GB
KVM: qemu-kvm-0.12.1.2-2.355.el6.2.cuttlefish.async.x86_64
Host: CentOS 6.4, kernel 2.6.32-358.14.1.el6.x86_64
Guest: CentOS 6.4, kernel 2.6.32-279.14.1.el6.x86_64
Ceph: ceph version
root@ineri:~# radosgw-admin user info
could not fetch user info: no user info saved
Hi Valery,
You need to use
radosgw-admin metadata list user
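and, if you need the full record for a single user, something like this (the uid is just an example):

radosgw-admin metadata get user:johndoe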
Thanks,
derek
--
---
Derek T. Yarnell
University of Maryland
Institute for Advanced Computer Studies