Re: [Gluster-users] How to read geo replication timestamps from logs

2017-11-26 Thread Aravinda
The slave's time in the log is not the Slave cluster's current time. It is
the timestamp up to which the Slave cluster is in sync with respect to the
current Master node.


From the log information below,

Master node's time is 2017-11-22 00:59:40.610574
Files/dirs from the current node are synced to the Slave cluster up to
(1511312352, 0), i.e., Wednesday, November 22, 2017 12:59:12 AM GMT


Geo-rep has yet to sync the difference between the current node and the
Slave cluster (2017-11-22 00:59:40.610574 minus 2017-11-22 00:59:12 GMT,
i.e., about 28 seconds of changes).


Sorry for the misleading log message. I will work on making it more
meaningful.
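The "slave's time" in the log is a (seconds, nanoseconds) tuple. A minimal
Python sketch of how to read it - the values are copied from Mark's log line,
but the arithmetic is mine, not output from any gluster tool:

```python
from datetime import datetime, timezone

# "slave's time" from the geo-replication log: (seconds, nanoseconds)
slave_time = (1511312352, 0)

# Point up to which changes from this Master node are synced to the Slave
synced_up_to = datetime.fromtimestamp(slave_time[0], tz=timezone.utc)
print(synced_up_to.strftime("%Y-%m-%d %H:%M:%S"))  # 2017-11-22 00:59:12

# Timestamp of the log line itself on the Master
master_time = datetime(2017, 11, 22, 0, 59, 40, 610574, tzinfo=timezone.utc)

# The gap is the window of not-yet-synced changes, not clock skew
lag = (master_time - synced_up_to).total_seconds()
print(f"sync lag: {lag:.1f} seconds")  # sync lag: 28.6 seconds
```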


--
regards
Aravinda VK
http://aravindavk.in



On Saturday 25 November 2017 07:48 AM, Mark Connor wrote:


Folks, need help interpreting this message from my geo rep logs for my 
volume mojo.


ssh%3A%2F%2Froot%40173.173.241.2%3Agluster%3A%2F%2F127.0.0.1%3Amojo-remote.log:[2017-11-22 
00:59:40.610574] I [master(/bricks/lsi/mojo):1125:crawl] _GMaster: 
slave's time: (1511312352, 0)


The epoch of 1511312352 is Wednesday, November 22, 2017 12:59:12 AM GMT.
The clocks are using the same ntp stratum and seem right on the money 
for master and slave.


Why the difference in seconds after taking into account the timezones?

Anyone know where both timestamps come from?


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users




Re: [Gluster-users] [ovirt-users] slow performance with export storage on glusterfs

2017-11-26 Thread Jiří Sléžka
On 11/24/2017 06:41 AM, Sahina Bose wrote:
> 
> 
> On Thu, Nov 23, 2017 at 4:56 PM, Jiří Sléžka wrote:
> 
> Hi,
> 
> On 11/22/2017 07:30 PM, Nir Soffer wrote:
> > On Mon, Nov 20, 2017 at 5:22 PM Jiří Sléžka wrote:
> >
> >     Hi,
> >
> >     I am trying to understand why exporting a vm to export storage on
> >     glusterfs is so slow.
> >
> >     I am using oVirt and RHV, both installations on version 4.1.7.
> >
> >     Hosts have dedicated nics for the rhevm network - 1gbps; the data
> >     storage itself is on FC.
> >
> >     The GlusterFS cluster lives separately on 4 dedicated hosts. It has
> >     slow disks but I can achieve about 200-400mbit throughput in other
> >     applications (we are using it for "cold" data, backups mostly).
> >
> >     I am using this glusterfs cluster as a backend for export storage.
> >     When I am exporting a vm I can see only about 60-80mbit throughput.
> >
> >     What could be the bottleneck here?
> >
> >     Could it be qemu-img utility?
> >
> >     vdsm      97739  0.3  0.0 354212 29148 ?        S
> >     /usr/bin/qemu-img convert -p -t none -T none -f raw
> >     /rhev/data-center/2ff6d0ee-a10b-473d-b77c-be9149945f5f/ff3cd56a-1005-4426-8137-8f422c0b47c1/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
> >     -O raw
> >     /rhev/data-center/mnt/glusterSD/10.20.30.41:_rhv__export/81094499-a392-4ea2-b081-7c6288fbb636/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
> >
> >     Any idea how to make it work faster, or what throughput I should
> >     expect?
> >
> >
> > gluster storage operations are using fuse mount - so every write:
> > - travel to the kernel
> > - travel back to the gluster fuse helper process
> > - travel to all 3 replicas - replication is done on client side
> > - return to kernel when all writes succeeded
> > - return to caller
> >
> > So gluster will never set any speed record.
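The client-side replication described above also puts a hard cap on write
throughput: every byte leaves the client once per replica. A rough,
illustrative sketch of that arithmetic - the NIC speed and replica count
here are assumptions for the example, not values confirmed in this thread:

```python
# Rough bandwidth arithmetic for client-side (AFR) replication:
# every write is sent from the client to each replica separately,
# so the client's NIC is shared between the replica streams.
nic_gbps = 1.0   # assumed client NIC speed
replicas = 3     # assumed replica count

# Upper bound on write throughput, ignoring protocol overhead
max_write_mb_s = nic_gbps * 1000 / 8 / replicas
print(f"upper bound ~{max_write_mb_s:.0f} MB/s")  # upper bound ~42 MB/s
```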
> >
> > Additionally, you are copying from a raw lv on FC - qemu-img cannot do
> > anything smart to avoid copying unused clusters. Instead it copies
> > gigabytes of zeros from FC.
> 
> ok, it does make sense
> 
> > However 7.5-10 MiB/s sounds too slow.
> >
> > I would try to test with dd - how much time it takes to copy
> > the same image from FC to your gluster storage?
> >
> > dd \
> >   if=/rhev/data-center/2ff6d0ee-a10b-473d-b77c-be9149945f5f/ff3cd56a-1005-4426-8137-8f422c0b47c1/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e \
> >   of=/rhev/data-center/mnt/glusterSD/10.20.30.41:_rhv__export/81094499-a392-4ea2-b081-7c6288fbb636/__test__ \
> >   bs=8M oflag=direct status=progress
> 
> Unfortunately, dd performs the same:
> 
> 1778384896 bytes (1.8 GB) copied, 198.565265 s, 9.0 MB/s
> 
> 
> > If dd can do this faster, please ask on qemu-discuss mailing list:
> > https://lists.nongnu.org/mailman/listinfo/qemu-discuss
> 
> >
> > If both give similar results, I think asking in gluster mailing list
> > about this can help. Maybe your gluster setup can be optimized.
> 
> ok, this is definitely on the gluster side. Thanks for your guidance.
> 
> I will investigate the gluster side and will also try exporting to an
> NFS share.
> 
> 
> [Adding gluster users ml]
> 
> Please provide "gluster volume info" output for the rhv_export gluster
> volume and also volume profile details (refer to earlier mail from Shani
> on how to run this) while performing the dd operation above.

You can find all of this output at https://pastebin.com/sBK01VS8

As mentioned in other posts, the Gluster cluster uses really slow (green)
disks, but without direct I/O it can achieve throughput of around 400mbit/s.
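As a quick sanity check on the numbers (my arithmetic, not output from any
tool in the thread), the dd result quoted above converts to roughly the
60-80mbit range seen during the export:

```python
# Convert the dd result above into the units used elsewhere in the thread
copied_bytes = 1778384896   # from the dd output
elapsed_s = 198.565265

mb_per_s = copied_bytes / elapsed_s / 1e6  # decimal MB/s, as dd reports
mbit_per_s = mb_per_s * 8

print(f"{mb_per_s:.1f} MB/s = {mbit_per_s:.0f} Mbit/s")  # 9.0 MB/s = 72 Mbit/s
```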

This storage is used mostly for backup purposes. It is not used as vm
storage.

In my case it would be nice to skip direct I/O for the export case, but I
understand why that might not be wise.

Cheers,

Jiri

> 
>  
> 
> 
> Cheers,
> 
> Jiri
> 
> 
> >
> > Nir
> >  
> >
> >
> >     Cheers,
> >
> >     Jiri
> >
> >
> >     ___
> >     Users mailing list
> >     us...@ovirt.org 
> >
> >     http://lists.ovirt.org/mailman/listinfo/users
> 
> >
> 
> 
> 

Re: [Gluster-users] Gluster Developer Conversations - Nov 28 at 15:00 UTC

2017-11-26 Thread Amye Scavarda
Quick reminder here:
15:00 UTC on Tuesday, November 28th.

Thus far, we've just got Raghavendra Talur, but we'll have places for 4
more talks if you're interested!

Here's the meeting invite:

To join the meeting on a computer or mobile phone:
https://bluejeans.com/6203770120?src=calendarLink

Amye Scavarda has invited you to a video meeting.

To join from a Red Hat Deskphone or Softphone, dial: 84336.
---
Connecting directly from a room system?

1.) Dial: 199.48.152.152 or bjn.vc
2.) Enter Meeting ID: 6203770120

Just want to dial in on your phone?

1.) Dial one of the following numbers:
 408-915-6466 (US)
 +1.888.240.2560 (US Toll Free)

See all numbers: https://www.redhat.com/en/conference-numbers
2.) Enter Meeting ID: 6203770120

3.) Press #

---
Want to test your video connection?
https://bluejeans.com/111

On Mon, Nov 6, 2017 at 2:59 PM, Amye Scavarda  wrote:
> Awesome!
> You're on the list.
> Anyone else want to present?
> - amye
>
> On Fri, Nov 3, 2017 at 3:01 AM, Raghavendra Talur  wrote:
>> I propose a talk
>>
>> "Life of a gluster client process"
>>
>> We will have a look at one complete life cycle of a client process
>> which includes:
>> * mount script and parsing of args
>> * contacting glusterd and fetching volfile
>> * loading and initializing the xlators
>> * how glusterd sends updates of volume options
>> * brick disconnection/reconnection
>> * glusterd disconnection/reconnection
>> * termination of mount
>>
>> Raghavendra Talur
>>
>>
>>
>> On Wed, Nov 1, 2017 at 9:43 PM, Amye Scavarda  wrote:
>>> Hi all!
>>> Based on the popularity of wanting more lightning talks at Gluster
>>> Summit, we'll be trying something new: Gluster Developer
>>> Conversations. This will be a one hour meeting on November 28th at UTC
>>> 15:00, with five 5 minute lightning talks and time for discussion in
>>> between. The meeting will be recorded, and I'll be posting the
>>> individual talks separately in our community channels.
>>>
>>> What would you want to talk about?
>>> Respond on this thread, and if I get more than five, we'll schedule
>>> following meetings.
>>> Thanks!
>>> - amye
>>>
>>> --
>>> Amye Scavarda | a...@redhat.com | Gluster Community Lead
>
>
>
> --
> Amye Scavarda | a...@redhat.com | Gluster Community Lead



-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead