Hello all!
As you probably already know, the lag situation on WDQS is not improving as
much as we'd like. Over the past week, we've managed to keep the lag mostly
below 3 hours, but at the cost of a lot of manual work. And yes, we know
that 3 hours of lag is already too much.
Some updates on what…
I don't know if there is actually someone who would be capable and have the
time to do so - I just hope there are such people. But it probably makes
sense to check whether there actually are volunteers before doing the work
to enable them :)
On Fri, Nov 15, 2019 at 5:17 AM Guillaume Lederrey wrote:
On Fri, Nov 15, 2019 at 12:49 AM Denny Vrandečić wrote:
> Just wondering, is there a way to let volunteers look into the issue? (I
> guess no, because it would potentially give access to the query stream, but
> maybe the answer is more optimistic)
>
There are ways, none of them easy. There are pr…
Just wondering, is there a way to let volunteers look into the issue? (I
guess no, because it would potentially give access to the query stream, but
maybe the answer is more optimistic)
On Thu, Nov 14, 2019 at 2:39 PM Thad Guidry wrote:
> In the enterprise, most folks use either Java Mission Control…
In the enterprise, most folks use either Java Mission Control or the Java
VisualVM profiler. Looking at sleeping Threads is often a good place to
start, and taking a snapshot or even a Heap Dump when things are really
grinding slowly would be useful; you can later share those snapshots/heap
dumps with…
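A minimal sketch of capturing such dumps with the stock JDK tools
(assuming jcmd is on the PATH and <pid> is the Blazegraph process id; the
file paths are illustrative):

    jcmd <pid> Thread.print > threads.txt          # thread dump: shows runnable/sleeping/blocked threads
    jcmd <pid> GC.heap_dump /tmp/blazegraph.hprof  # heap dump, can be opened later in JMC or VisualVM

Both files can be analyzed offline, which fits the "take a snapshot while
it is grinding and share it later" workflow suggested above.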
Hello!
Thanks for the suggestions!
On Thu, Nov 14, 2019 at 5:02 PM Thad Guidry wrote:
> Is the Write Retention Queue adequate?
> Is the branching factor for the lexicon indices too large, resulting in a
> non-linear slowdown in the write rate over time?
> Did you look into Small Slot Optimization…
Is the Write Retention Queue adequate?
Is the branching factor for the lexicon indices too large, resulting in a
non-linear slowdown in the write rate over time?
Did you look into Small Slot Optimization?
Are the Write Cache Buffers adequate?
Is there a lot of Heap pressure?
Is the MemoryManager having…
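For context, several of those knobs map to settings in Blazegraph's
RWStore.properties. An illustrative excerpt (property names as I recall
them from the bigdata option docs; the values are placeholders, not tuning
advice):

    com.bigdata.btree.writeRetentionQueue.capacity=4000
    com.bigdata.btree.BTree.branchingFactor=128
    com.bigdata.journal.AbstractJournal.writeCacheBufferCount=12

Whether the WDQS production values differ from defaults like these is
exactly the kind of thing the questions above are probing.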
Thanks for the feedback!
On Thu, Nov 14, 2019 at 11:11 AM wrote:
>
> Besides waiting for the new updater, it may be useful to tell us what
> we as users can do, too. It is unclear to me what the problem is. For
> instance, at one point I was worried that the many parallel requests to
> the SPARQL…
As the Wikitech WDQS Hardware section [1] explains, “due to how we route
traffic with GeoDNS, the primary cluster (usually eqiad) sees most of
the traffic.” So the clusters may all have the same hardware, but one of
them sees most of the query load and therefore has a harder time keeping
up with updates (…
Besides waiting for the new updater, it may be useful to tell us what
we as users can do, too. It is unclear to me what the problem is. For
instance, at one point I was worried that the many parallel requests to
the SPARQL endpoint that we make in Scholia are a problem. As far as I
understand…
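On the "what can users do" question, one low-effort mitigation (my own
suggestion, not an official recommendation, and the client name below is
made up for illustration) is to serialize or rate-limit bulk queries and
send a descriptive User-Agent so operators can identify and contact heavy
clients. A hypothetical sketch in Python:

    import time
    import requests

    ENDPOINT = "https://query.wikidata.org/sparql"
    # Hypothetical identifying User-Agent; contact details let operators reach you.
    HEADERS = {"User-Agent": "ExampleScholiaClient/0.1 (mailto:someone@example.org)"}

    def run_queries(queries, delay_seconds=1.0):
        """Run SPARQL queries one at a time, pausing between requests."""
        results = []
        for query in queries:
            response = requests.get(
                ENDPOINT,
                params={"query": query, "format": "json"},
                headers=HEADERS,
                timeout=60,
            )
            response.raise_for_status()
            results.append(response.json())
            time.sleep(delay_seconds)  # throttle instead of firing in parallel
        return results

Whether parallel reads actually contribute to the update lag is exactly
what is unclear in this thread, so treat this as harm reduction rather
than a fix.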
Hello all!
As you've probably noticed, the update lag on the public WDQS endpoint [1]
is not doing well [2], with lag climbing to > 12h for some servers. We are
tracking this on Phabricator [3]; subscribe to that task if you want to
stay informed.
To be perfectly honest, we don't have a good short…