Hi,
which repository should I use for Luminous under Ubuntu 17.10?
I want a completely new install with ceph-deploy, no upgrade.
Is there a good tutorial for a fresh install including BlueStore?
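A rough sketch of the usual steps for such a fresh install (whether builds for 17.10 exist in that repository is exactly the open question; host and device names are placeholders, ceph-deploy 2.x syntax assumed):

    wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
    echo deb https://download.ceph.com/debian-luminous/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
    sudo apt update && sudo apt install ceph-deploy

    ceph-deploy new bd-0 bd-1 bd-2
    ceph-deploy install --release luminous bd-0 bd-1 bd-2
    ceph-deploy mon create-initial
    ceph-deploy admin bd-0 bd-1 bd-2
    ceph-deploy mgr create bd-0
    ceph-deploy osd create --data /dev/sdb bd-0    # BlueStore is the default in Luminous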
--
Kind regards,
Markus Goldberg
x.xxx.xxx.21:6789/0,bd-2=xxx.xxx.xxx.22:6789/0}
election epoch 6, quorum 0,1,2 bd-0,bd-1,bd-2
osdmap e91: 90 osds: 0 up, 0 in
flags sortbitwise
pgmap v92: 64 pgs, 1 pools, 0 bytes data, 0 objects
0 kB used, 0 kB / 0 kB avail
64 creating
.00 (sum of its item)
item node2 weight 2.00
}
Then you can use the default ruleset; it is set to take the root "default".
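For context, the relevant buckets in a decompiled CRUSH map look roughly like this (names and weights are examples):

    host node2 {
            id -3           # do not change unnecessarily
            alg straw
            hash 0          # rjenkins1
            item osd.2 weight 1.00
            item osd.3 weight 1.00
    }
    root default {
            id -1
            alg straw
            hash 0
            item node1 weight 2.00
            item node2 weight 2.00    # sum of its items
    }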
2016-03-21 19:50 GMT+08:00 Markus Goldberg <goldb...@uni-hildesheim.de>:
Hi Desmond,
this is my decompile_map:
root@bd-a:/etc/ceph#
117 MB used, 189 TB / 189 TB avail
992 active+clean
root@bd-a:~#
Is it possible for the author of ceph-deploy to make a reboot during these two steps unnecessary?
Then it would also be possible to use create instead of prepare+activate.
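For reference, the two variants look roughly like this (host and partition names are examples, ceph-deploy 1.5.x syntax assumed):

    # two separate steps, with the reboot possibly happening in between:
    ceph-deploy osd prepare bd-0:sdb1
    ceph-deploy osd activate bd-0:sdb1

    # single step:
    ceph-deploy osd create bd-0:sdb1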
Thank you,
Markus
On 11.06.2014 16:47, Alfredo Deza wrote:
On Wed, Jun 11, 2014 at 9:29 AM, Markus Goldberg wrote:
Hi,
ceph-deploy 1.5.3 can cause trouble if a reboot is done between preparation
and activation of an OSD:
The OSD disk was /dev/sdb at this time; the OSD itself should go to sdb1,
formatted to
ble at the moment or am i mistyping?
BTW: Deleting or shrinking an empty image takes very, very long.
Thank you,
Markus
, how many cores?
Does anyone know if M.2 SSDs are supported in their PCIe slots?
Thank you very much,
Markus
think about higher spec.
Does anyone know the exact processor requirement for a 30-drive node
doing erasure coding? I can't find a suitable hardware
recommendation for erasure coding.
Cheers
K.Mohamed Pakkeer
On Thu, Apr 9, 2015 at 1:30 PM, Markus Goldberg
<goldb...@uni-hildesheim.de> wrote:
[ceph_deploy.osd][DEBUG ] Host bd-2 is now ready for osd use.
root@bd-a:/etc/ceph#
--
Kind regards,
Markus Goldberg
--
Kind regards,
Markus Goldberg
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Markus Goldberg
Sent: Thursday, April 23, 2015 5:41 AM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] SAS-Exp 9300-8i or Raid-Contr 9750-4i ?
Hi,
I will upgrade my existing hardware (in 3 SC847 cases with 30 HDDs each) the
next
host = ce-1
btrfs devs = /dev/sdb1
[osd.2]
host = ce-2
btrfs devs = /dev/sdb1
[mds.a]
host = ce-0
Thank you,
Markus
Hi,
I'm at the step ADD/REMOVE OSDS.
I want to use '--fs-type btrfs', but every single command says:
'ceph-deploy: error: unrecognized arguments: --fstype btrfs'
If I use it without this parameter, a ufs disk is created.
What am I doing wrong?
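In case it helps: in ceph-deploy versions that support it, the option is spelled --fs-type and goes with the osd prepare/create subcommand, before the HOST:DISK argument; very old versions only honoured a ceph.conf setting. Both lines below are assumptions about the version in use:

    ceph-deploy osd prepare --fs-type btrfs bd-0:sdb1

    # or, for versions without the command-line option, in ceph.conf:
    [osd]
            osd mkfs type = btrfs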
So the device name for the OSD (RAID-6 array) changed from sdb to sdc.
In ceph.conf there is no longer a device entry for the OSDs.
What should I do?
--
Kind regards,
Markus Goldberg
On 05.06.2013 09:42, Markus Goldberg wrote:
Hi,
I have Cuttlefish and I'm using ceph-deploy.
My ceph.conf is this:
fsid = 775cb230-1b4c-41fb-8473-5b92cexx
mon_initial_members = bd-0, bd-1, bd-2
mon_host = 147.172.xxx.x0,147.172.xxx.x1,147.172.xxx.x2
auth_supported =
/myceph/Test# df -h .
Dateisystem Größe Benutzt Verf. Verw% Eingehängt auf
###.###.###.20:6789:/ 240G 25M 240G 1% /mnt/myceph
BTW /dev/sda on the servers are 256GB-SSDs
Can anyone please help?
Thank you, Markus
--
Kind regards,
Markus Goldberg
tched this behavior to
hopefully do things the better/"more right" way for the future, but it's
possible you have an odd version or combination that gives goofy results.
sage
On Wed, 12 Jun 2013, Markus Goldberg wrote:
Hi,
this is Cuttlefish 0.63 on Ubuntu 13.04, underlying OSD FS
fix statvfs fr_size":
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=92a49fb0f79f3300e6e50ddf56238e70678e4202
(Kernels 3.9-rc1 and later should include it.)
Cheers,
David
--
Kind regards,
Markus Goldberg
eph xxx.xxx.xxx.xxx:6789:/dir1/dir2 /mnt/myceph -v -o
name=admin,secretfile=/etc/ceph/admin.secret'
(admin.secret is the key for the data root dir (/).)
How can I give specific clients read/write access to only a subset of the directories?
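On newer releases, a path-restricted key plus the matching mount look roughly like this (client name, path, and pool are examples; old releases did not support the path= restriction in MDS caps):

    ceph auth get-or-create client.backup \
            mon 'allow r' \
            mds 'allow rw path=/dir1/dir2' \
            osd 'allow rw pool=cephfs_data'

    mount -t ceph xxx.xxx.xxx.xxx:6789:/dir1/dir2 /mnt/myceph \
            -o name=backup,secretfile=/etc/ceph/backup.secret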
* For Debian/Ubuntu packages, see http://ceph.com/docs/master/install/debian
* For RPMs, see http://ceph.com/docs/master/install/rpm
--
Kind regards,
Markus Goldberg
--
Kind regards,
Markus Goldberg
Does no one have an idea?
I can't mount the cluster anymore.
Thank you,
Markus
On 10.09.2013 09:43, Markus Goldberg wrote:
Hi,
I did a 'stop ceph-all' on my ceph admin host and then a
kernel upgrade from 3.9 to 3.11 on all of my 3 nodes.
Ubuntu 13.04, Ceph 0.68.
The
Can this be done without losing existing data?
Thank you very much,
Markus
You can set the limit on cluster creation by adding
mds max file size = 100
(or whatever) to your ceph.conf before creating the monitors.
sage
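For illustration, that would sit in ceph.conf roughly like this (the section placement and the value are examples; 1099511627776 bytes = 1 TiB is the usual compiled-in default):

    [global]
            mds max file size = 1099511627776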
On Thu, 9 Jan 2014, Markus Goldberg wrote:
Hi Sage,
that sounds good.
Thank you very much,
Markus
On 09.01.2014 13:10, Sage Weil wrote:
Hi,
Can someone please mail the correct ownership and permissions of the
dirs and files in /etc/ceph and /var/lib/ceph?
Thank you,
Markus
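For what it's worth, a rough baseline (a sketch; exact modes vary by release, and on releases before Infernalis the daemons ran as root rather than as the ceph user):

    chown -R ceph:ceph /var/lib/ceph                  # root:root on older releases
    chown root:root /etc/ceph
    chmod 755 /etc/ceph
    chmod 644 /etc/ceph/ceph.conf
    chmod 600 /etc/ceph/ceph.client.admin.keyring     # keep the admin key private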
bled. I think it is CRUSH v2.
What should I do?
Server and client are Ubuntu 13.04, kernel 3.12.6. I also tried 3.13
but without success.
This is Ceph 0.75.
--
Kind regards,
Markus Goldberg
be safely
rebased onto 3.13 final too.)
Thanks,
Ilya
--
Kind regards,
Markus Goldberg
I have not counted all file sizes, but 21T seems to be correct.
--
Kind regards,
Markus Goldberg
of sparse files in cephfs.
For sparse files, CephFS increases the "used" space by the full file size. See
http://ceph.com/docs/next/dev/differences-from-posix/
Yan, Zheng
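A quick illustration of the effect Zheng describes (paths are examples, run on a CephFS mount):

    # create a 1 GiB sparse file that contains no data blocks
    dd if=/dev/zero of=/mnt/myceph/sparse bs=1 count=0 seek=1G
    ls -lh /mnt/myceph/sparse    # apparent size: 1.0G
    du -h  /mnt/myceph/sparse    # may also report ~1.0G "used" on CephFS,
                                 # although no space is consumed on the OSDs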
On Fri, Feb 21, 2014 at 6:13 PM, Markus Goldberg wrote:
Hi,
this is Ceph 0.77, Ubuntu 13.04 (ceph server and ceph client
ter the kernel in 13.04.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Fri, Feb 21, 2014 at 10:55 AM, Markus Goldberg wrote:
Hi,
No, it's certain that the backup files really are that big; the output of the du command
is correct.
The files were rsynced from another system, which I
root@bd-a:/mnt/myceph#
On 25.02.2014 07:39, Gregory Farnum wrote:
Hrm, yeah, that patch actually went in prior to 3.9 (it's older than I
remember!). What's the output of "ls -l" from the root of the Ceph
hierarchy, and what's the output of "ceph osd dump"?
-Greg
was a must.
So the question is: are journals obsolete now?
--
Kind regards,
Markus Goldberg
specifying them with exactly the same syntax as before.
The page you're looking at is the simplified "quick start"; the detail
on OSD creation, including journals, is here:
http://eu.ceph.com/docs/v0.77/rados/deployment/ceph-deploy-osd/
Cheers,
John
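For reference, with the ceph-deploy syntax of that era the journal is the optional third colon-separated field (host and device names are examples):

    ceph-deploy osd create bd-0:/dev/sdb:/dev/sda5    # data on sdb, journal on sda5
    ceph-deploy osd create bd-0:/dev/sdb              # journal co-located on the data disk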
On Fri, Mar 14, 2014 at 9:47 AM, Markus Goldberg wrote: