applied
On Mon, Jul 31, 2017 at 11:33:16AM +0200, Fabian Grünbichler wrote:
> Signed-off-by: Fabian Grünbichler
> ---
> bin/init.d/Makefile | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/bin/init.d/Makefile b/bin/init.d/Makefile
> index d6ac3782..99ca432d 100644
> --- a/bin/init.d/M
On Mon, Jul 31, 2017 at 03:15:22PM +0200, Dominik Csapak wrote:
> this series fixes a few things with ceph luminous
>
> we now correctly use crush_rule instead of crush_ruleset
> we are able to set a different device for bluestore db/wal
> we correctly delete all partitions of osds when destroying
indentation was wrong on those lines, and js lint later complains about
alias not being an array, so make those lines not an array
Signed-off-by: Dominik Csapak
---
www/manager6/ceph/OSD.js | 40
www/manager6/ceph/Pool.js | 4 ++--
2 files changed, 22
this patch does a few things:
1. we introduce a new api call /nodes/nodename/ceph/rules
which gets us a list of crush rules (see the sketch below)
2. we introduce a new CephRuleSelector which is a simple combobox
with the data from the api call ceph/rules
3. we use this in the create pool window
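roughly, the endpoint from point 1 could look like the following sketch
(the return format and the 'osd crush rule ls' mon command are my
assumptions, not necessarily what the patch implements):

    use strict;
    use warnings;

    use PVE::JSONSchema qw(get_standard_option);
    use PVE::RADOS;

    __PACKAGE__->register_method({
        name => 'rules',
        path => 'rules',
        method => 'GET',
        description => "List CRUSH rules.",
        proxyto => 'node',
        protected => 1,
        parameters => {
            additionalProperties => 0,
            properties => {
                node => get_standard_option('pve-node'),
            },
        },
        returns => {
            type => 'array',
            items => { type => 'object', properties => {} },
        },
        code => sub {
            my ($param) = @_;

            my $rados = PVE::RADOS->new();
            # 'osd crush rule ls' yields the plain list of rule names
            my $res = $rados->mon_command({ prefix => 'osd crush rule ls' });

            return [ map { { name => $_ } } @$res ];
        }});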
Signed-off-by: D
Signed-off-by: Dominik Csapak
---
changes from v1:
* new in v2
www/manager6/ceph/Monitor.js | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/www/manager6/ceph/Monitor.js b/www/manager6/ceph/Monitor.js
index af7ffe7e..efa8239b 100644
--- a/www/manager6/ceph/Monitor.js
+++ b/www/
this uses the same icons for hosts/osds as in the resource tree,
and also uses the same arrow style
Signed-off-by: Dominik Csapak
---
changes from v1:
* give the root node the 'server' icon
www/manager6/ceph/OSD.js | 19 +++
1 file changed, 19 insertions(+)
diff --git a/www/mana
whenever a window is closed (creation, deletion) we want to reload the
pool grid, so we do not have to wait for the next refresh
Signed-off-by: Dominik Csapak
---
changes from v1:
* do not use an unnecessary reload function but call rstore.load directly
www/manager6/ceph/Pool.js | 8 +++-
1 file ch
Signed-off-by: Dominik Csapak
---
changes from v1:
* do not localize 'Bluestore'
www/manager6/ceph/OSD.js | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/www/manager6/ceph/OSD.js b/www/manager6/ceph/OSD.js
index d1c57988..e962c882 100644
--- a/www/manager6/ceph/OSD.js
++
we fetch the names in the backend, return them as an additional field
in the api call, and use them in the grid
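on the backend side this could boil down to something like the sketch
below (the 'crush_rule_name' field name and the mon command are
illustrative assumptions, not quoted from the patch):

    use PVE::RADOS;

    my $rados = PVE::RADOS->new();

    # build a rule-id -> rule-name map once from the crush dump
    my $rules = $rados->mon_command({ prefix => 'osd crush rule dump' });
    my $rulenames = { map { $_->{rule_id} => $_->{rule_name} } @$rules };

    # $pools is assumed to be the pool list the api call already returns;
    # attach the resolved name next to the numeric crush_rule
    foreach my $pool (@$pools) {
        $pool->{crush_rule_name} = $rulenames->{$pool->{crush_rule}};
    }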
Signed-off-by: Dominik Csapak
---
PVE/API2/Ceph.pm | 11 +++
www/manager6/ceph/Pool.js | 7 +++
2 files changed, 18 insertions(+)
diff --git a/PVE/API2/Ceph.pm b/P
since ceph 12.1.1 the (deprecated) parameter 'crush_ruleset' is removed
and replaced with 'crush_rule'. while changing this, change the type from
integer to string so that we can later use the names of the rules
instead of the id
(for now there seems to be a bug where you can only use the name and
not the id)
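the parameter change itself could be sketched like this (the
description text is made up, only the name and type change are from the
commit message):

    # before: crush_ruleset => { type => 'integer', ... }
    my $pool_params = {
        crush_rule => {
            description => "The rule to use for mapping object placement in the cluster.",
            type => 'string',
            optional => 1,
        },
    };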
we reuse the 'journal_dev' parameter for bluestore's block.db
and add a new parameter 'wal_dev' for bluestore's write-ahead log;
if only journal_dev is given, use it for both db and wal
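the intended semantics are roughly the following sketch (parameter
names are from the commit message, the die() reflects the v2 change
noted below, and the surrounding code is assumed):

    my $bluestore = $param->{bluestore};

    # v2: wal_dev without bluestore is an error
    die "parameter 'wal_dev' requires bluestore\n"
        if defined($param->{wal_dev}) && !$bluestore;

    my $db_dev  = $param->{journal_dev};          # reused for block.db
    my $wal_dev = $param->{wal_dev} // $db_dev;   # fall back to the db device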
Signed-off-by: Dominik Csapak
---
changes from v1:
* error out when wal_dev is given without bluestore
* check th
Signed-off-by: Dominik Csapak
---
PVE/API2/Ceph.pm | 2 +-
www/manager6/ceph/OSD.js | 6 ++
2 files changed, 7 insertions(+), 1 deletion(-)
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index f3f313cd..f18b76cf 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@ -103,7 +103,7
we now have to remove 5 types of partitions:
data/metadata
journal
block
block.db
block.wal
this patch fixes the detection of block/block.db/block.wal
and generalizes it
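conceptually the generalized cleanup could look like this sketch
(remove_partition() stands in for whatever helper the code already
uses; the data/metadata partition is the mount point itself and is
handled separately):

    my $mountpoint = "/var/lib/ceph/osd/ceph-$osdid";

    # journal, block, block.db and block.wal are all symlinks below the
    # osd mount point, so one loop can resolve and remove every role
    foreach my $name (qw(journal block block.db block.wal)) {
        my $link = "$mountpoint/$name";
        next if ! -l $link;
        my $part = readlink($link);
        remove_partition($part) if $part && -b $part;
    }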
Signed-off-by: Dominik Csapak
---
PVE/API2/Ceph.pm | 21 +
1 file changed, 13 insertions(+), 8 deletions(-)
this series fixes a few things with ceph luminous
we now correctly use crush_rule instead of crush_ruleset
we are able to set a different device for bluestore db/wal
we correctly delete all partitions of osds when destroying
we improve the gui (bluestore checkbox, nicer icons, correct reload)
cha
On Thu, Jul 27, 2017 at 11:25:41AM +0200, Emmanuel Kasper wrote:
> It can happen that the qmp connection gets lost while mirroring a disk.
> In that case the current block job gets cancelled, but the real cause of the
> failure is lost, because we die() at a later step with the generic message
> "
LGTM, a few nit-picks below.
On Mon, Jul 31, 2017 at 11:28:32AM +0200, Dominik Csapak wrote:
> this series fixes a few things with ceph luminous
>
> we now use crush_rule instead of crush_ruleset correctly
pre-select the first (and in most cases, only) rule in the GUI for this.
> we are able to
Signed-off-by: Fabian Grünbichler
---
bin/init.d/Makefile | 2 ++
1 file changed, 2 insertions(+)
diff --git a/bin/init.d/Makefile b/bin/init.d/Makefile
index d6ac3782..99ca432d 100644
--- a/bin/init.d/Makefile
+++ b/bin/init.d/Makefile
@@ -30,6 +30,8 @@ install: ${SCRIPTS}
install -m 06
we reuse the 'journal_dev' parameter for bluestore's block.db
and add a new parameter 'wal_dev' for bluestore's write-ahead log;
if only journal_dev is given, use it for both db and wal
Signed-off-by: Dominik Csapak
---
PVE/API2/Ceph.pm | 44
1 file cha
since ceph 12.1.1 the (deprecated) parameter 'crush_ruleset' is removed
and replaced with 'crush_rule'. while changing this, change the type from
integer to string so that we can later use the names of the rules
instead of the id
(for now there seems to be a bug where you can only use the name and
not the id)
Signed-off-by: Dominik Csapak
---
www/manager6/ceph/OSD.js | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/www/manager6/ceph/OSD.js b/www/manager6/ceph/OSD.js
index 32b0a20a..9915d082 100644
--- a/www/manager6/ceph/OSD.js
+++ b/www/manager6/ceph/OSD.js
@@ -112,7 +112,12
this patch does a few things:
1. we introduce a new api call /nodes/nodename/ceph/rules
which gets us a list of crush rules
2. we introduce a new CephRuleSelector which is a simple combobox
with the data from the api call ceph/rules
3. we use this in the create pool window
Signed-off-by:
we fetch the names in the backend, return them as an additional field
in the api call, and use them in the grid
Signed-off-by: Dominik Csapak
---
PVE/API2/Ceph.pm | 11 +++
www/manager6/ceph/Pool.js | 7 +++
2 files changed, 18 insertions(+)
diff --git a/PVE/API2/Ceph.pm b/P
this series fixes a few things with ceph luminous
we now correctly use crush_rule instead of crush_ruleset
we are able to set a different device for bluestore db/wal
we correctly delete all partitions of osds when destroying
we improve the gui (bluestore checkbox, nicer icons, correct reload)
Dom
whenever a window is closed (creation, deletion) we want to reload the
pool grid, so we do not have to wait for the next refresh
Signed-off-by: Dominik Csapak
---
www/manager6/ceph/Pool.js | 12 +++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/www/manager6/ceph/Pool.js b/ww
this uses the same icons for hosts/osds as in the resource tree,
and also uses the same arrow style
Signed-off-by: Dominik Csapak
---
www/manager6/ceph/OSD.js | 16
1 file changed, 16 insertions(+)
diff --git a/www/manager6/ceph/OSD.js b/www/manager6/ceph/OSD.js
index a56e3070.
indentation was wrong on those lines, and js lint later complains about
alias not being an array, so make those lines not an array
Signed-off-by: Dominik Csapak
---
www/manager6/ceph/OSD.js | 40
www/manager6/ceph/Pool.js | 4 ++--
2 files changed, 22
we now have to remove 5 types of partitions:
data/metadata
journal
block
block.db
block.wal
this patch fixes the detection of block/block.db/block.wal
and generalizes it
Signed-off-by: Dominik Csapak
---
PVE/API2/Ceph.pm | 21 +
1 file changed, 13 insertions(+), 8 deletions(-)
Signed-off-by: Dominik Csapak
---
PVE/API2/Ceph.pm | 2 +-
www/manager6/ceph/OSD.js | 6 ++
2 files changed, 7 insertions(+), 1 deletion(-)
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index fc76bfb2..8e792c4f 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@ -103,7 +103,7
On Sunday, 30.07.2017 at 10:52 +0200, Martin Lablans wrote:
> Of course it would be preferable to leave this in the admin's hands via
> system-wide LVM configuration. This would also give Tom the flexibility
> for his setup. However, I don't know a way to achieve striping in LVM
> witho