I got an error with this:

sysbench \
  --test=/usr/share/sysbench/tests/include/oltp_legacy/parallel_prepare.lua \
  --mysql-host=127.0.0.1 --mysql-port=33033 --mysql-user=sysbench \
  --mysql-password=password --mysql-db=sysbench \
  --mysql-table-engine=innodb --db-driver=mysql --oltp_tables_count=10

Has anyone been able to run the tests, and could you share some results?
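For what it's worth, the command as quoted ends without an action keyword (prepare/run/cleanup), which is one common cause of errors with the legacy Lua scripts. A sketch of a complete invocation, assuming sysbench 0.5/1.x with the legacy scripts installed and the same host/port/credentials as above (with parallel_prepare.lua the tables are created by the "run" action itself, and oltp_tables_count is usually kept divisible by the thread count):

```shell
# Sketch only: assumes a reachable MySQL on 127.0.0.1:33033 with the
# sysbench user/database already created.  The legacy scripts need an
# explicit action keyword at the end of the command line.
sysbench \
  --test=/usr/share/sysbench/tests/include/oltp_legacy/parallel_prepare.lua \
  --mysql-host=127.0.0.1 --mysql-port=33033 \
  --mysql-user=sysbench --mysql-password=password --mysql-db=sysbench \
  --mysql-table-engine=innodb --db-driver=mysql \
  --oltp_tables_count=10 --num-threads=10 \
  run
```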
Thanks in advance,
Best,
*German*
2017-11-30 14:25 GMT-03:00 German Anders :
That's correct, IPoIB for the backend (already configured the IRQ
affinity), and 10GbE on the frontend. I would love to try RDMA, but like
you said it's not stable for production, so I think I'll have to wait for
that. Yeah, the thing is that it's not my decision to go for 50GbE or
100GbE... :( so..
On 2017-11-27 14:02, German Anders wrote:

4x 2U servers:
1x 82599ES 10-Gigabit SFI/SFP+ Network Connection
1x Mellanox ConnectX-3 InfiniBand FDR 56Gb/s Adapter (dual port)
1x OneConnect 10Gb NIC (quad-port)

so I assume you are using IPoIB as the cluster network for the
replication...
-----Original Message-----
From: German Anders [mailto:gand...@despegar.com]
Sent: Tuesday, 28 November 2017 19:34
To: Luis Periquito
Cc: ceph-users
Subject: Re: [ceph-users] ceph all-nvme mysql performance tuning

Thanks a lot Luis, I agree with you regarding the CPUs, but
unfortunately those were t[...]
2017-11-27 12:16 GMT-03:00 Nick Fisk <n...@fisk.me.uk>:
Hi German,
We have a similar config:

proxmox-ve: 5.1-27 (running kernel: 4.13.8-1-pve)
pve-manager: 5.1-36 (running version: 5.1-36/131401db)
pve-kernel-4.13.8-1-pve: 4.13.8-27
ceph: 12.2.1-pve3
System (4 nodes): Supermicro 2028U-TN24R4T+
2-port Mellanox ConnectX-3 Pro 56Gbit
4-port Intel 10GigE
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of German Anders
Sent: 27 November 2017 14:44
To: Maged Mokhtar <mmokh...@petasan.org>
Cc: ceph-users <ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] ceph all-nvme mysql performance tuning
Hi Maged,

Thanks a lot for the response. We tried with different numbers of threads and
we're getting almost the same kind of difference between the storage types.
Going to try different rbd stripe size and object size values and see if
we get more competitive numbers. Will get back with more [...]
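The stripe size / object size experiment mentioned above is easier to reason about with the layout arithmetic written out. A minimal sketch (a hypothetical helper, not the librbd API; it follows the usual RADOS striping layout, where stripe-unit-sized chunks are laid out round-robin across a set of stripe_count objects, each object_size bytes):

```python
# Hypothetical helper: map a byte offset in an RBD image to
# (object_number, offset_within_object) under RADOS-style striping.
def locate(offset, object_size=4 << 20, stripe_unit=4 << 20, stripe_count=1):
    """Return (object number, byte offset inside that object)."""
    set_bytes = object_size * stripe_count        # bytes per object set
    object_set = offset // set_bytes              # which object set
    off_in_set = offset % set_bytes
    unit = off_in_set // stripe_unit              # stripe unit index in the set
    obj_in_set = unit % stripe_count              # round-robin across the set
    off_in_obj = (unit // stripe_count) * stripe_unit + off_in_set % stripe_unit
    return object_set * stripe_count + obj_in_set, off_in_obj

# Defaults (4M objects, no striping): object number is just offset // object_size.
print(locate(9 << 20))                                         # (2, 1048576)
# With a 64K stripe unit over 4 objects, consecutive 64K chunks land on
# different objects, spreading small sequential IO across more OSDs.
print(locate(64 << 10, stripe_unit=64 << 10, stripe_count=4))  # (1, 0)
```

Larger objects amortize per-object overhead, while smaller stripe units spread one client's IO over more OSDs in parallel; which wins is workload-dependent, hence the experiments.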
Hi Wido, thanks a lot for the quick response. Regarding the questions:

Have you tried to attach multiple RBD volumes:
- Root for OS (the root partition is on local SSDs)
- MySQL data dir (the idea is to have all the storage tests with the same
scheme; the first test is using one volume and put the [...]
> On 27 November 2017 at 14:14, German Anders wrote:
>
> Hi Jason,
>
> We are using librbd (librbd1-0.80.5-9.el6.x86_64), ok I will change those
> parameters and see if that changes something
>
0.80? Is that a typo? You should really use 12.2.1 on the client.
Wido
Hi Jason,
We are using librbd (librbd1-0.80.5-9.el6.x86_64); ok, I will change those
parameters and see if that changes something.
Thanks a lot,
Best,
*German*
2017-11-27 10:09 GMT-03:00 Jason Dillaman :
Are you using krbd or librbd? You might want to consider "debug_ms = 0/0"
as well, since per-message log gathering takes a large hit on small-IO
performance.
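For reference, the setting Jason mentions goes in ceph.conf. A minimal sketch (placing it under [global] applies it to daemons and librbd clients alike; a [client] section would restrict it to the client side; the 0/0 form sets both the log level and the in-memory level):

```ini
[global]
# Disable messenger-level debug logging: gathering a log entry per
# message adds measurable latency on small IO.
debug_ms = 0/0
```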
Hi All,
I have a performance question. We recently installed a brand new Ceph cluster
with all-NVMe disks, using Ceph version 12.2.0 with BlueStore configured.
The back-end of the cluster is using a bonded IPoIB interface (active/passive),
and for the front-end we are using a bonding config with active/active [...]
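A back-end/front-end bonding setup of the kind described might look roughly like this in Debian-style /etc/network/interfaces syntax (a sketch only: interface names and addresses are made up; note that IPoIB bonding generally supports only the active-backup mode, which matches the active/passive back-end described):

```
# Back-end: active/passive bond over the two IPoIB ports
auto bond0
iface bond0 inet static
    address 10.0.0.10
    netmask 255.255.255.0
    bond-slaves ib0 ib1
    bond-mode active-backup
    bond-miimon 100

# Front-end: active/active (LACP) bond over the 10GbE ports
auto bond1
iface bond1 inet static
    address 10.0.1.10
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100
```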