Re: [Gluster-users] How to read geo replication timestamps from logs

2017-11-26 Thread Aravinda
The slave's time in the log is not the Slave cluster's current time. It is 
the time up to which the Slave cluster is in sync with the current 
Master node.


From the log information below,

The Master node's time is 2017-11-22 00:59:40.610574.
Files/dirs from the current node have been synced to the Slave cluster up to 
(1511312352, 0), i.e., Wednesday, November 22, 2017 12:59:12 AM GMT.


Geo-rep has yet to sync the difference between the current node and the Slave 
cluster (2017-11-22 00:59:40.610574 minus Wednesday, November 22, 2017 
12:59:12 AM GMT, i.e., about 28 seconds).
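
As a quick sanity check of that interpretation, the tuple from the log can be 
converted with GNU date (a minimal sketch; the second element of the tuple is 
assumed to be the sub-second part, and both timestamps are assumed to be UTC):

    # "slave's time: (1511312352, 0)" -> human-readable UTC
    date -u -d @1511312352
    # Wed Nov 22 00:59:12 UTC 2017
    #
    # Master log timestamp: 2017-11-22 00:59:40.610574 (UTC)
    # Sync lag at that moment: 00:59:40 - 00:59:12, i.e. roughly 28 seconds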


Sorry for the confusing log message. I will work on changing it into 
something more meaningful.


--
regards
Aravinda VK
http://aravindavk.in



On Saturday 25 November 2017 07:48 AM, Mark Connor wrote:


Folks, need help interpreting this message from my geo rep logs for my 
volume mojo.


ssh%3A%2F%2Froot%40173.173.241.2%3Agluster%3A%2F%2F127.0.0.1%3Amojo-remote.log:[2017-11-22 
00:59:40.610574] I [master(/bricks/lsi/mojo):1125:crawl] _GMaster: 
slave's time: (1511312352, 0)


The epoch of 1511312352 is Wednesday, November 22, 2017 12:59:12 AM GMT.
The clocks are using the same ntp stratum and seem right on the money 
for master and slave.


Why is there a difference of some seconds after taking the timezones into account?

Anyone know where both timestamps come from?


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users




Re: [Gluster-users] [ovirt-users] slow performance with export storage on glusterfs

2017-11-26 Thread Jiří Sléžka
On 11/24/2017 06:41 AM, Sahina Bose wrote:
> 
> 
> On Thu, Nov 23, 2017 at 4:56 PM, Jiří Sléžka wrote:
> 
> Hi,
> 
> On 11/22/2017 07:30 PM, Nir Soffer wrote:
> > On Mon, Nov 20, 2017 at 5:22 PM Jiří Sléžka wrote:
> >
> >     Hi,
> >
> >     I am trying to understand why exporting a VM to export storage on
> >     glusterfs is so slow.
> >
> >     I am using oVirt and RHV, both installations on version 4.1.7.
> >
> >     Hosts have dedicated nics for the rhevm network - 1 Gbps; the data
> >     storage itself is on FC.
> >
> >     The GlusterFS cluster lives separately on 4 dedicated hosts. It has
> >     slow disks, but I can achieve about 200-400 Mbit/s throughput in other
> >     applications (we are using it for "cold" data, mostly backups).
> >
> >     I am using this glusterfs cluster as the backend for export storage.
> >     When I am exporting a VM, I see only about 60-80 Mbit/s throughput.
> >
> >     What could be the bottleneck here?
> >
> >     Could it be qemu-img utility?
> >
> >     vdsm      97739  0.3  0.0 354212 29148 ?  S  /usr/bin/qemu-img convert -p -t none -T none -f raw
> >     /rhev/data-center/2ff6d0ee-a10b-473d-b77c-be9149945f5f/ff3cd56a-1005-4426-8137-8f422c0b47c1/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
> >     -O raw
> >     /rhev/data-center/mnt/glusterSD/10.20.30.41:_rhv__export/81094499-a392-4ea2-b081-7c6288fbb636/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
> >
> >     Any idea how to make it faster, or what throughput I should expect?
> >
> >
> > gluster storage operations go through the fuse mount - so every write:
> > - travels to the kernel
> > - travels back to the gluster fuse helper process
> > - travels to all 3 replicas - replication is done on the client side
> > - returns to the kernel when all writes have succeeded
> > - returns to the caller
> >
> > So gluster will never set any speed record.
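
(A minimal sketch, not from the thread: since every write pays that full FUSE 
plus client-side replication round trip, the per-request overhead shows up 
clearly when comparing request sizes on the gluster mount with direct I/O; the 
mount path below is only a placeholder.)

    GLUSTER_MNT=/rhev/data-center/mnt/glusterSD/10.20.30.41:_rhv__export   # placeholder mount point
    # same total size (128 MiB), different request sizes; direct I/O so the
    # page cache does not hide the round trips
    dd if=/dev/zero of=$GLUSTER_MNT/__ddtest__ bs=64k count=2048 oflag=direct conv=fsync status=progress
    dd if=/dev/zero of=$GLUSTER_MNT/__ddtest__ bs=8M count=16 oflag=direct conv=fsync status=progress
    rm -f $GLUSTER_MNT/__ddtest__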
> >
> > Additionally, you are copying from a raw LV on FC - qemu-img cannot do
> > anything smart to avoid copying unused clusters. Instead it copies
> > gigabytes of zeros from FC.
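
(A minimal sketch, a suggested diagnostic rather than something from the 
thread: the two halves of the copy can be timed separately - reading the raw 
LV from FC, and writing to the gluster mount - using the paths from the 
qemu-img command above.)

    # read side only: how fast does the raw LV come off the FC storage?
    dd if=/rhev/data-center/2ff6d0ee-a10b-473d-b77c-be9149945f5f/ff3cd56a-1005-4426-8137-8f422c0b47c1/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e of=/dev/null bs=8M count=256 iflag=direct status=progress
    # write side only: how fast can the gluster mount absorb direct writes?
    dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/10.20.30.41:_rhv__export/81094499-a392-4ea2-b081-7c6288fbb636/__test__ bs=8M count=256 oflag=direct status=progress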
> 
> ok, it does make sense
> 
> > However 7.5-10 MiB/s sounds too slow.
> >
> > I would test with dd - how much time does it take to copy
> > the same image from FC to your gluster storage?
> >
> > dd
> > if=/rhev/data-center/2ff6d0ee-a10b-473d-b77c-be9149945f5f/ff3cd56a-1005-4426-8137-8f422c0b47c1/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
> > of=/rhev/data-center/mnt/glusterSD/10.20.30.41:_rhv__export/81094499-a392-4ea2-b081-7c6288fbb636/__test__
> > bs=8M oflag=direct status=progress
> 
> unfortunately, dd performs the same
> 
> 1778384896 bytes (1.8 GB) copied, 198.565265 s, 9.0 MB/s
> 
> 
> > If dd can do this faster, please ask on qemu-discuss mailing list:
> > https://lists.nongnu.org/mailman/listinfo/qemu-discuss
> 
> >
> > If both give similar results, I think asking in gluster mailing list
> > about this can help. Maybe your gluster setup can be optimized.
> 
> ok, this is definitely on the gluster side. Thanks for your guidance.
> 
> I will investigate the gluster side and will also try export storage on an
> NFS share.
> 
> 
> [Adding gluster users ml]
> 
> Please provide "gluster volume info" output for the rhv_export gluster
> volume and also volume profile details (refer to earlier mail from Shani
> on how to run this) while performing the dd operation above.

you can find all this output on https://pastebin.com/sBK01VS8
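
(For reference, a minimal sketch of how such output is typically gathered with 
the gluster CLI; the volume name is taken from the thread, and the exact 
procedure Shani described may have differed:)

    gluster volume info rhv_export
    gluster volume profile rhv_export start
    # ... run the dd test while profiling is enabled ...
    gluster volume profile rhv_export info
    gluster volume profile rhv_export stop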

As mentioned in other posts, the Gluster cluster uses really slow (green)
disks, but without direct I/O it can achieve throughput of around 400 Mbit/s.

This storage is used mostly for backup purposes. It is not used as VM
storage.
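
(A minimal sketch of that comparison, buffered versus direct I/O against the 
same mount; the path is a placeholder, and conv=fsync is there so the buffered 
run still has to wait for the data to reach gluster before reporting a rate.)

    # buffered: writes are absorbed by the page cache and flushed at the end
    dd if=/dev/zero of=/mnt/gluster-backup/__test__ bs=8M count=128 conv=fsync status=progress
    # direct I/O: every 8M request waits for the full FUSE/replication round trip
    dd if=/dev/zero of=/mnt/gluster-backup/__test__ bs=8M count=128 oflag=direct conv=fsync status=progress
    rm -f /mnt/gluster-backup/__test__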

In my case it would be nice not to use direct I/O for the export case, but I
understand why that might not be wise.

Cheers,

Jiri

> 
>  
> 
> 
> Cheers,
> 
> Jiri
> 
> 
> >
> > Nir
> >  
> >
> >
> >     Cheers,
> >
> >     Jiri
> >
> >
> 
> >
> 
> 
> 

Re: [Gluster-users] Changing performance.parallel-readdir to on causes CPU soft lockup and very high load all glusterd nodes

2017-11-26 Thread Niels Hendriks
Hi,

Just to update this thread: we updated from Gluster 3.12.2 to 3.12.3, which
seems to have resolved the issue. I checked the changelog and don't see
anything that looks related, but I'm glad it appears to be OK now.
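
(For anyone hitting the same thing, a minimal sketch of checking and reverting 
the option with the standard gluster CLI; the volume name is a placeholder.)

    gluster volume get myvol performance.parallel-readdir
    gluster volume set myvol performance.parallel-readdir off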

Niels Hendriks

On 14 November 2017 at 09:42, Niels Hendriks  wrote:

> Hi,
>
> We're using a 3-node setup where GlusterFS is running as both a client and
> a server with a fuse mount-point.
>
> We tried to change the performance.parallel-readdir setting to on for a
> volume, but after that the load on all 3 nodes skyrocketed due to the
> glusterd process and we saw CPU soft lockup errors in the console.
> I had to completely bring down/reboot all 3 nodes and disable the setting
> again.
>
> There were tons of errors like the ones below - does anyone know what
> could cause this?
>
> Nov 10 20:55:53 n01c01 kernel: [196591.960126] BUG: soft lockup - CPU#6
> stuck for 22s! [touch:25995]
> Nov 10 20:55:53 n01c01 kernel: [196591.960168] Modules linked in:
> xt_multiport binfmt_misc hcpdriver(PO) nf_conntrack_ipv6 nf_defrag_ipv6
> ip6table_filter ip6_tables nf_conntrack_ipv4 nf_defrag_ipv4 xt_tcpudp
> xt_conntrack iptable_filter ip_tables x_tables xfs libcrc32c crc32c_generic
> nls_utf8 nls_cp437 vfat fat x86_pkg_temp_thermal coretemp iTCO_wdt
> iTCO_vendor_support kvm_intel kvm crc32_pclmul ast ttm aesni_intel
> drm_kms_helper efi_pstore aes_x86_64 lrw gf128mul evdev glue_helper
> ablk_helper joydev cryptd pcspkr efivars drm i2c_algo_bit mei_me lpc_ich
> mei mfd_core shpchp tpm_tis wmi tpm ipmi_si ipmi_msghandler acpi_pad
> processor button acpi_power_meter thermal_sys nf_conntrack_ftp nf_conntrack
> fuse autofs4 ext4 crc16 mbcache jbd2 dm_mod hid_generic usbhid hid sg
> sd_mod crc_t10dif crct10dif_generic ehci_pci xhci_hcd ehci_hcd ahci libahci
> i2c_i801 crct10dif_pclmul crct10dif_common libata ixgbe i2c_core
> crc32c_intel dca usbcore scsi_mod ptp usb_common pps_core nvme mdio
> Nov 10 20:55:53 n01c01 kernel: [196591.960224] CPU: 6 PID: 25995 Comm:
> touch Tainted: P   O  3.16.0-4-amd64 #1 Debian 3.16.43-2+deb8u5
> Nov 10 20:55:53 n01c01 kernel: [196591.960226] Hardware name: Supermicro
> SYS-1028U-TNR4T+/X10DRU-i+, BIOS 2.0c 04/21/2017
> Nov 10 20:55:53 n01c01 kernel: [196591.960228] task: 88184bf872f0 ti:
> 88182cbc task.ti: 88182cbc
> Nov 10 20:55:53 n01c01 kernel: [196591.960229] RIP:
> 0010:[]  [] _raw_spin_lock+0x25/0x30
> Nov 10 20:55:53 n01c01 kernel: [196591.960237] RSP: 0018:88182cbc3b78
> EFLAGS: 0287
> Nov 10 20:55:53 n01c01 kernel: [196591.960239] RAX: 5e5c RBX:
> 811646a5 RCX: 5e69
> Nov 10 20:55:53 n01c01 kernel: [196591.960240] RDX: 5e69 RSI:
> 811bffa0 RDI: 88182e42bc70
> Nov 10 20:55:53 n01c01 kernel: [196591.960241] RBP: 88182e42bc18 R08:
> 0028 R09: 0001
> Nov 10 20:55:53 n01c01 kernel: [196591.960242] R10: 00012f40 R11:
> 0010 R12: 88182cbc3af0
> Nov 10 20:55:53 n01c01 kernel: [196591.960243] R13: 0286 R14:
> 0010 R15: 81519fce
> Nov 10 20:55:53 n01c01 kernel: [196591.960244] FS:  7fc005c67700()
> GS:88187fc8() knlGS:
> Nov 10 20:55:53 n01c01 kernel: [196591.960246] CS:  0010 DS:  ES: 
> CR0: 80050033
> Nov 10 20:55:53 n01c01 kernel: [196591.960247] CR2: 7f498c1ab148 CR3:
> 000b0fcf4000 CR4: 003407e0
> Nov 10 20:55:53 n01c01 kernel: [196591.960248] DR0:  DR1:
>  DR2: 
> Nov 10 20:55:53 n01c01 kernel: [196591.960249] DR3:  DR6:
> fffe0ff0 DR7: 0400
> Nov 10 20:55:53 n01c01 kernel: [196591.960250] Stack:
> Nov 10 20:55:53 n01c01 kernel: [196591.960251]  81513196
> 88182e42bc18 880aa062e8d8 880aa062e858
> Nov 10 20:55:53 n01c01 kernel: [196591.960254]  811c1120
> 88182cbc3bd0 88182e42bc18 00032b68
> Nov 10 20:55:53 n01c01 kernel: [196591.960256]  88182943b010
> 811c1988 88182e42bc18 8817de0db218
> Nov 10 20:55:53 n01c01 kernel: [196591.960258] Call Trace:
> Nov 10 20:55:53 n01c01 kernel: [196591.960263]  [] ?
> lock_parent.part.17+0x1d/0x43
> Nov 10 20:55:53 n01c01 kernel: [196591.960268]  [] ?
> shrink_dentry_list+0x1f0/0x240
> Nov 10 20:55:53 n01c01 kernel: [196591.960270]  [] ?
> check_submounts_and_drop+0x68/0x90
> Nov 10 20:55:53 n01c01 kernel: [196591.960278]  [] ?
> fuse_dentry_revalidate+0x1e8/0x300 [fuse]
> Nov 10 20:55:53 n01c01 kernel: [196591.960281]  [] ?
> lookup_fast+0x25e/0x2b0
> Nov 10 20:55:53 n01c01 kernel: [196591.960283]  [] ?
> link_path_walk+0x1ab/0x870
> Nov 10 20:55:53 n01c01 kernel: [196591.960285]  [] ?
> path_openat+0x9c/0x680
> Nov 10 20:55:53 n01c01 kernel: [196591.960289]  [] ?
> handle_mm_fault+0x63c/0x1150
> Nov 10 20:55:53 n01c01 kernel: [196591.960292]  [] ?
> mmap_region+0x19c/0x650
> Nov 10 20:55:53 n01c01 kernel: [196591.960294]  [] ?
> do_filp_op

Re: [Gluster-users] Gluster Developer Conversations - Nov 28 at 15:00 UTC

2017-11-26 Thread Amye Scavarda
Quick reminder here:
15:00 UTC on Tuesday, November 28th.

Thus far we've just got Raghavendra Talur, but we'll have room for 4
more talks if you're interested!
-- 

Here's the meeting invite:

To join the meeting on a computer or mobile phone:
https://bluejeans.com/6203770120?src=calendarLink

Amye Scavarda has invited you to a video meeting.

To join from a Red Hat Deskphone or Softphone, dial: 84336.
---
Connecting directly from a room system?

1.) Dial: 199.48.152.152 or bjn.vc
2.) Enter Meeting ID: 6203770120

Just want to dial in on your phone?

1.) Dial one of the following numbers:
 408-915-6466 (US)
 +1.888.240.2560 (US Toll Free)

See all numbers: https://www.redhat.com/en/conference-numbers
2.) Enter Meeting ID: 6203770120

3.) Press #

---
Want to test your video connection?
https://bluejeans.com/111

On Mon, Nov 6, 2017 at 2:59 PM, Amye Scavarda  wrote:
> Awesome!
> You're on the list.
> Anyone else want to present?
> - amye
>
> On Fri, Nov 3, 2017 at 3:01 AM, Raghavendra Talur  wrote:
>> I propose a talk
>>
>> "Life of a gluster client process"
>>
>> We will have a look at one complete life cycle of a client process
>> which includes:
>> * mount script and parsing of args
>> * contacting glusterd and fetching volfile
>> * loading and initializing the xlators
>> * how glusterd sends updates of volume options
>> * brick disconnection/reconnection
>> * glusterd disconnection/reconnection
>> * termination of mount
>>
>> Raghavendra Talur
>>
>>
>>
>> On Wed, Nov 1, 2017 at 9:43 PM, Amye Scavarda  wrote:
>>> Hi all!
>>> Based on the demand for more lightning talks at Gluster Summit, we'll be
>>> trying something new: Gluster Developer Conversations. This will be a
>>> one-hour meeting on November 28th at 15:00 UTC, with five 5-minute
>>> lightning talks and time for discussion in between. The meeting will be
>>> recorded, and I'll post the individual talks separately in our community
>>> channels.
>>>
>>> What would you want to talk about?
>>> Respond on this thread, and if I get more than five, we'll schedule
>>> following meetings.
>>> Thanks!
>>> - amye
>>>
>>> --
>>> Amye Scavarda | a...@redhat.com | Gluster Community Lead
>
>
>
> --
> Amye Scavarda | a...@redhat.com | Gluster Community Lead



-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users