Just a short reminder that our monthly office hours will take place
tomorrow at 9am UTC on irc:#wikimedia-research.
More information and how to join:
https://www.mediawiki.org/wiki/Wikimedia_Research/Office_hours
Hope to see you there.
- Martin
On Fri, Jun 19, 2020 at 9:27 AM Martin Gerlach
Hi all,
join the teams from Analytics and Research for their monthly office hours
next Wednesday, 2020-06-24 from 9.00-10.00am (UTC)*. Bring all your
research/analytics questions and ideas to discuss projects, data, analysis,
etc. To participate, please join the IRC channel: #wikimedia-research
Just a short reminder that our monthly office hours will be taking place
later today (6pm UTC) on irc:#wikimedia-research.
More info on how to join:
https://www.mediawiki.org/wiki/Wikimedia_Research/Office_hours
Hope to see you there.
- Martin
On Fri, May 22, 2020 at 5:46 PM Martin Gerlach
Hi all,
join us for our monthly Analytics/Research Office hours next Wednesday,
2020-05-27 at 18.00-19.00 (UTC)*. Bring all your research questions and
ideas to discuss projects, data, analysis, etc…
To participate, please join the IRC channel: #wikimedia-research [1]. More
detailed information
Just a friendly reminder that our office hours will be taking place
tomorrow (Wednesday).
Hope to see you there.
- Martin
On Fri, Apr 17, 2020 at 10:09 AM Martin Gerlach
wrote:
> Hi all,
>
> join us for our monthly Analytics/Research Office hours on 2020-04-29 at
> 18.00-19.00 (UTC). Bring all
Hi all,
join us for our monthly Analytics/Research Office hours on 2020-04-29 at
18.00-19.00 (UTC). Bring all your research questions and ideas to discuss
projects, data, analysis, etc… To participate, please join the IRC channel:
#wikimedia-research [1]. More detailed information can be found
g/p/Research-Analytics-Office-hours
> --
> Martin Gerlach
> Research Scientist
> Wikimedia Foundation
> ___
> Analytics mailing list
> Analytics@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/analytics
Hi all,
join us for our monthly Analytics/Research Office hours on 2020-03-25 at
17.00-18.00 (UTC). Bring all your research questions and ideas to discuss
projects, data, analysis, etc…
To participate, please join the IRC channel: #wikimedia-research [1].
More detailed information can be found
Hi all,
join us for our monthly Analytics/Research Office hours on 2020-02-26 at
17.00-18.00 (UTC). Bring all your research questions and ideas to discuss
projects, data, analysis, etc…
To participate, please join the IRC channel: #wikimedia-research [1].
More detailed information can be found
Hi everybody,
as part of https://phabricator.wikimedia.org/T201165 the Analytics team
wanted to reach out to everybody to make it clear that all the home
directories on the stat/notebook nodes are not backed up periodically. They
run on a software RAID configuration spanning multiple disks of
Hi everybody,
the Analytics team is planning to upgrade the Hadoop cluster to CDH 5.16.1
(changelog in https://phabricator.wikimedia.org/T218343) on Wed Apr 17th at
15:00 CET. All services (HDFS, Hive, Oozie, Notebooks, etc..) will be
unavailable for one hour if everything goes according to plan,
Hi everybody,
the Analytics team will completely shut down the Hadoop cluster for a couple
of hours on Monday Nov 12th at 14:00 CEST to upgrade the Cloudera
distribution to 5.15 (currently 5.10). No big updates, just a collection
of small/medium fixes that (hopefully) will improve the
Hi everybody,
this is a reminder that the maintenance will happen tomorrow (Tue 25th,
10:00 CEST).
Luca
On Fri, Sep 14, 2018 at 12:13 PM Luca Toscano <
ltosc...@wikimedia.org> wrote:
> Hi everybody,
>
> the Analytics team needs to replace the Hadoop master node hosts
>
Hi everybody,
the Analytics team needs to replace the Hadoop master node hosts
(analytics100[1,2]) and the Hive/Oozie host (analytics1003) as part of
regular hardware refresh (hosts getting out of warranty). In order to do
things safely we decided to proceed with a full cluster shutdown on Sept
Hi everybody,
as you already know we are deploying the new Linux kernel to fix the
Meltdown vulnerability across the production fleet. This means that I need
to reboot all the stat boxes (stat100[456]) and also analytics1003 (running
Oozie, Camus, Hive, etc..), probably interfering with the work
Hi again (for the last time hopefully :),
Hive is back up and running fine. I'll try to write a summary of what happened
in https://phabricator.wikimedia.org/T179943 for everybody interested. The
regular Hadoop jobs were completely stopped so there was no issue with data
loss/inconsistency, only a
Hi everybody,
we are experiencing some issues with the Hive daemon, so currently Hive
queries are not available. I am going to update this thread as soon as the
issue is over.
For more info, please contact me (elukey) on IRC (#wikimedia-analytics).
Sorry for the trouble!
Luca
2017-12-06 19:47
Hi everybody,
we need to reboot the analytics1003 host for Linux kernel and openjdk
updates tomorrow Dec 07 at 10 AM CET. Hive and Oozie will stop for a
(hopefully) brief amount of time, but since they'll need to stop before the
reboot, it might happen that in-flight jobs/queries fail. We'll try
[Updating only the Analytics list]
Hi everybody,
I forgot to update this email thread last week. The Event Logging master
database switch went fine but as reported the maintenance window affected
the Eventlogging schema graphs in the Eventlogging Schema dashboard. For
example, this is how the
Hi everybody,
the Analytics team needs to do the following maintenance operations:
1) migrate the Event-Logging master db ('log', currently on db1046) to the
new host db1107 (T156844). This should happen on *Wed Nov 15th (EU morning)*,
and it should be transparent to all the Event Logging users.
ve yet to
> >> write up).
> >>
> >> However, to do a better study it would be very helpful to have slightly
> >> more information
> >> than is in the API. Specifically, it would be very useful to be able to
>> query, for each
>> _pair_ of pages, how many people (or IP's) viewed _both_ of those pages.
>> That way I can find
>> out which pag
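The pair-level co-view count being asked for here is not something the public pageview API exposes, but given hypothetical raw (viewer, page) logs the computation itself is simple. A minimal sketch, with invented viewer IDs and page names for illustration:

```python
from collections import Counter
from itertools import combinations

def coview_pairs(view_log):
    """Count, for each unordered pair of pages, how many distinct
    viewers saw both pages. view_log is an iterable of
    (viewer_id, page) tuples."""
    pages_by_viewer = {}
    for viewer, page in view_log:
        pages_by_viewer.setdefault(viewer, set()).add(page)

    pair_counts = Counter()
    for pages in pages_by_viewer.values():
        # every unordered pair of pages this viewer saw counts once
        for pair in combinations(sorted(pages), 2):
            pair_counts[pair] += 1
    return pair_counts

log = [("u1", "A"), ("u1", "B"),
       ("u2", "A"), ("u2", "B"), ("u2", "C")]
print(coview_pairs(log)[("A", "B")])  # both u1 and u2 viewed A and B
```

In practice the privacy implications of per-viewer data are exactly why such logs are not published, which is the constraint the rest of the thread runs into.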
hard to figure out
> pageviews for
> specific pages by country rather than language.
>
> My question is, would you happen to know if is there any way to obtain
> this information?
> (does not necessarily have to be through the API.) Or do you know if there
> ar
Dear list,
I'm posting a recent conversation with Dan below, as well as a few follow-up
questions.
Dan was kind enough to point out this list. I apologize that the post is
"backward" (in email-thread format) due to my ignorance; I will use this
list from now on.
Thanks, Daniel
Hi
And, we’re back up! Thanks all! Everything went smoothly.
(Big thanks to Luca for driving.)
-Andrew & Luca
On Tue, Feb 28, 2017 at 9:07 AM, Andrew Otto wrote:
> Alright! This is starting now.
>
> On Mon, Feb 27, 2017 at 3:29 PM, Andrew Otto
Alright! This is starting now.
On Mon, Feb 27, 2017 at 3:29 PM, Andrew Otto wrote:
> Just a reminder, we will be taking the Analytics Hadoop Cluster offline
> tomorrow morning EST. I’ll email again tomorrow right before we do, and
> also once it is back up.
>
>
Hi everyone,
We are planning an upgrade of the Hadoop cluster on February 28th. We need
to take the cluster down for this upgrade. The actual upgrade shouldn’t
take more than 2 hours, but we’re going to reserve the whole work day of
February 28th to do this, just in case something goes wrong.
On Wed, Jan 4, 2017 at 1:21 PM, Addshore wrote:
> The query listed selecting from wikidatawiki.wb_terms was me /
> wmde-analytics and runs daily.
>
> I agree that some sort of query limits would make sense.
> What limits are currently in place?
>
The wikidata queries I
analytics-store was brought down at 6am, and then again at 9am UTC 25 Dec
due to multiple executions of long running queries (some of them 2 days
long) such as:
SELECT LEFT(timestamp, 8) AS yearmonthday, timestamp, userAgent, clientIp,
webHost, COUNT(*) AS copies FROM log.PageContentSaveComplete
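For context, the shape of aggregation such a query performs (grouping save events by day and client) can be reproduced on a toy table. This is an illustrative sketch using Python's sqlite3, with substr standing in for MySQL's LEFT; the table contents are invented, not the actual production query or data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE PageContentSaveComplete (timestamp TEXT, clientIp TEXT)")
rows = [("20161225060102", "1.2.3.4"),   # two saves from the same client
        ("20161225070000", "1.2.3.4"),   # on the same day
        ("20161226080000", "5.6.7.8")]
conn.executemany("INSERT INTO PageContentSaveComplete VALUES (?, ?)", rows)

# substr(timestamp, 1, 8) plays the role of MySQL's LEFT(timestamp, 8):
# the first 8 characters of a MediaWiki timestamp are YYYYMMDD.
for day, ip, copies in conn.execute(
        "SELECT substr(timestamp, 1, 8) AS yearmonthday, clientIp, "
        "COUNT(*) AS copies FROM PageContentSaveComplete "
        "GROUP BY yearmonthday, clientIp ORDER BY yearmonthday"):
    print(day, ip, copies)
```

Run over months of events without an index on the grouped expression, this kind of full scan is exactly what ties up a shared replica for days.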
After consulting with several people in several departments, we have
scheduled a maintenance window (downtime) for dbstore1002 (alias
analytics-store, and s2 *to* s7-analytics-slave) for Thursday 21 July
between 14:00-15:00 UTC [0].
The downtime is expected to be of only 5-10 minutes, but in case
Hi again!
Forgot to mention that I have currently stopped all Analytics jobs on
Hadoop (Camus, Oozie) as a preparation step for a complete Hadoop cluster
reboot to install new Linux kernel upgrades. The reboots will be performed
in small batches to limit the blast radius but it might affect your
FYI, I have created a phab ticket with this info:
https://phabricator.wikimedia.org/T138505
On Thu, Jun 23, 2016 at 5:39 PM, Nuria Ruiz wrote:
> Hello!
>
> (adding analytics@ public list)
>
>
> >Opera Mini sends different user agent if it's in Mini or Turbo mode and
> UC
Hello!
(adding analytics@ public list)
>Opera Mini sends a different user agent if it's in Mini or Turbo mode, and UC
Mini has a slightly different user agent than the rest of the UC browsers. I
did a small writeup https://wikitech.wikimedia.org/wiki/ProxyBrowsers
Thanks for doing the write up, I
Greib.
From: Analytics <analytics-boun...@lists.wikimedia.org> on behalf of
analytics-requ...@lists.wikimedia.org <analytics-requ...@lists.wikimedia.org>
Sent: Friday, May 27, 2016 18:12
To: analytics@lists.wikimedia.org
Subject: Analytics Di
Thanks Jaime!
On Thu, May 26, 2016 at 8:14 PM, Jaime Crespo wrote:
> The server seems to be back in a relatively good state; however it
> will be behind in replication both for s* shards (wiki data) and the
> eventlogging database; I would suggest waiting for a day if
Hi all,
a few minutes ago dbstore1002 (I think you know it better as
analytics-store) was forced into unscheduled maintenance, a.k.a.
"it crashed and I am trying to give it first aid".
Please use db1047 (analytics-slave?) for now, if you can.
I will follow up with a state update once I
available on API (Nuria Ruiz)
>>>2. Hive & Oozie downtime tomorrow (Andrew Otto)
>>>3. Re: Unique Devices data available on API (Gergo Tisza)
>>>4. Re: Unique Devices data available on API (Kevin Leduc)
>> Message: 1
>> Date: Tue, 19 Apr 2016 12:17:12 -0700
>> From: Nuria Ruiz <nu...@wikimedia.org>
>> To: "A mailing list for the Analytics Team at WMF and everybody who
>> has an interest in Wikipedia and analytics."
>> <analytics@lists.wikimedia.org>, Wikimedia developers
>> <wikitec...@lists.wikimedia.org>,
>> wiki-researc...@lists.wikimedia.org
>> Subject: [Analytics] Unique Devices data available on API
>> Message-ID:
>>
Hiya,
We’re ready to upgrade the Analytics Cluster to CDH 5.5. To do so, we need
to schedule a maintenance period during which we can stop all Hadoop
related services. This includes Hive, Oozie, Spark, etc.
I’d like to plan this for Tuesday February 23rd starting at 14:00 UTC
(09:00 US east
Sure,
+analytics
On Mon, Jun 8, 2015 at 5:50 PM, Adam Baso ab...@wikimedia.org wrote:
Okay to move this to the public list and remove the internal list?
On Monday, June 8, 2015, Joseph Allemandou jalleman...@wikimedia.org
wrote:
Hi,
In fact, instead of using the isAppPageview UDF, one should
I think I am missing some basic understanding of this parameter then. In
my understanding, if I saw 1000 pageviews total, and 999 of them were
internal referrals, then this would indicate 1 user who visited 1000 pages
(999 of them by clicking on internal links). If I instead saw 1000 pageviews and none of
On Thu, Jun 4, 2015 at 5:26 AM, Dan Andreescu dandree...@wikimedia.org
wrote:
However, I'm not sure you can deduce much about the session length from
this result.
Thanks, Dan.
I think I am missing some basic understanding of this parameter then. In
my understanding, if I saw 1000 pageviews
Is it somehow possible to get the number of visitors for a particular day
(for example, May 27)?
I opened the relevant metrics but see no way to get a breakdown or set a
daily scale...
https://metrics.wmflabs.org/static/public/dash/#projects=ruwikimedia/metrics=DailyPageviews
Well, the pageviews data will very deliberately /not/ contain any data
from the chapters' wikis. So...
The link I sent (this one
https://metrics.wmflabs.org/static/public/dash/#projects=ruwikimedia/metrics=DailyPageviews%20(webstatscollector))
shows the data because that aggregator is running
--
Oliver Keyes
Research Analyst
Wikimedia Foundation
Gotcha :)
On 27 May 2015 at 12:17, Dan Andreescu dandree...@wikimedia.org wrote:
Well, the pageviews data will very deliberately /not/ contain any data
from the chapters' wikis. So...
The link I sent (this one) shows the data because that aggregator is running
on the webstatscollector
Hi, guys!
I was advised to write to the list and look for help here :)
The situation itself is explained here:
https://phabricator.wikimedia.org/T91963
We have the site of our chapter, Wikimedia RU, located on WMF servers and
that's why we have limited access to the management of the site.
We
restarted, and tmp space expanded.
It's back up now.
Sean
--
DBA @ WMF
Hi
analytics-store tmp space filled up today with many large temporary
tables (it was ~32G) from many slow research queries. Those had to be
killed, the database process restarted, and tmp space expanded.
It's back up now.
Sean
--
DBA @ WMF
Just a heads-up:
Analytics-store is seeing several hours of replag on s1, s4, and s6.
s4 is me doing a commonswiki schema change, which should be done
shortly. s1 and s6 are lagging due to load from queries like:
create table staging.enwiki_intra select a.pl_from as page_id_from,
a.pl_title as
Yesterday at the MW Summit I mentioned to Dan/Nuria/Halfak that
analytics-store disk was ~20% used. JFTR that was wrong; it's actually
closer to 45% used.
It's a 6T RAID10 10K array holding S1-7, Eventlogging, Staging, and the
Data Warehouse test schema. Still enough space for a good while at
Following up on the last sprint by the Analytics Engineering team in 2014:
The team met all its commitments and even took on a few more tasks. There
were no sprints or showcases over the holidays. Our next showcase will be
Tuesday January 13, 2015, and I will forward the slide deck after the
Had to kill queries, lest analytics-store grind to a halt and take even
longer to recover.
These ones: https://gerrit.wikimedia.org/r/#/c/178381/
On Sun, Dec 21, 2014 at 8:09 PM, Sean Pringle sprin...@wikimedia.org
wrote:
These last few days analytics-store replication has started to lag by
this is mainly a heads-up email. Let us know if ops should kill
stuff to let the box catch up again.
BR
Sean
--
DBA @ WMF
There is over six months of data in MobileWebClickTracking_5929948.
Maryana, maybe the older data could be purged as well? That'd probably
speed up the queries.
Dan
On 22 December 2014 at 09:29, Maryana Pinchuk mpinc...@wikimedia.org
wrote:
Thanks for the heads up -- the
These last few days analytics-store replication has started to lag by some
hours. Currently s1 (enwiki) and s5 (dewiki, wikidatawiki) are the most
affected. Eventlogging is not lagging, thanks to the nicely batched writes
it does nowadays :-)
There are many slow queries running from the research user
Hi,
On Thu, Dec 11, 2014 at 03:02:37PM -0800, Kevin Leduc wrote:
[...]
Burndown Chart
https://phabricator.wikimedia.org/sprint/view/935/
Sprint Board
https://phabricator.wikimedia.org/sprint/board/935/query/all/
Both of those URLs ask me to sign in.
Is this on purpose?
If not, could
On Mon, 2014-12-15 at 19:40 +0100, Christian Aistleitner wrote:
https://phabricator.wikimedia.org/sprint/view/935/
https://phabricator.wikimedia.org/sprint/board/935/query/all/
Both of those URLs ask me to sign in.
Confirming. I've filed https://phabricator.wikimedia.org/T78585 (feel
free to
Hello,
It has been a while since the last email of this kind. The team continued
its bi-weekly sprints around Columbus Day, US Thanksgiving and through the
switch from Bugzilla to Phabricator. We have now re-organized our
processes around Phabricator and are excited to see how this tool will
Niklas,
Can you answer this question from Nuria?
jsahleen: does beta have its own varnish instance? where are you posting your
events in beta? can you send the url?
Also would it be possible to document the steps you used when testing EL on
beta so that others can reproduce them?
Thanks,
Hi,
Ops changed the password for the research user on analytics-store (I
don't know what the context is, I just saw the commit summary on the Ops
ML), which our team was using to generate tsvs for limn. Maybe it wasn't
the right user to be using, I'm not sure, but one way or another we're
going
Hello,
The Analytics Development Team kicked off a sprint this morning. You can
follow here:
http://sb.wmflabs.org/t/analytics-developers/2014-10-16/
The theme for this sprint is fixing the metrics.
BugID | Component | Summary | Points
71255 https://bugzilla.wikimedia.org/show_bug.cgi?id=71255
--
Dan Garry
Associate Product Manager, Mobile Apps
Wikimedia Foundation
Hi Pine,
as usual, not speaking for the Foundation, just writing down my own 2
cents.
On Sun, Oct 12, 2014 at 05:49:11PM -0700, Pine W wrote:
I've noticed in my time on this list that it seems like the reliability of
Wikimedia Analytics services is a bit spotty.
I can see that this impression
Christian Aistleitner, 13/10/2014 11:49:
Hence, more things get brought to the list than would surface on
targeted lists or for closed down shops.
+1. I think the information quality on this list has recently increased a
lot. If one looks only at public mailing list archives (as opposed to
On Oct 12, 2014 8:49 PM, Pine W wiki.p...@gmail.com wrote:
I've noticed in my time on this list that it seems like the reliability
of Wikimedia Analytics services is a bit spotty. I've worked in IT
services, and our web and email servers' reliability seemed pretty good,
comparable to the
On Oct 12, 2014 9:02 PM, Jeremy Baron jer...@tuxmachine.com wrote:
Some services (e.g. wikimetrics, reportcard) run on labs so they inherent
the lower uptime expectations of labs vs. prod.
s/inherent/inherit/
Result:
The team completed 34 of 57 points and is close to completing the 21-point
story #70887. The remaining story, implementing the metric "Rolling
recurring old active editors", remains problematic due to performance
issues.
Please note, the team is getting together on-site in San
Hi,
The fruits of our labor on Editor Engagement Vital Signs (EEVS) are on
display. This is still an early release; we have a backlog of feedback
from internal stakeholders, and more iterations are to come.
https://metrics.wmflabs.org/static/public/dash/
This sprint’s commitments are:
Bug ID
The team completed 55 out of 55 points! Go team. Here are the slides from
the showcase:
https://docs.google.com/presentation/d/1y54uF5PkYc9Sa7VWOykKXQ4DXqh_n3VxDMAR2-CCKss/edit?usp=sharing
Cheers,
Kevin Leduc
On Thu, Sep 4, 2014 at 5:38 PM, Kevin Leduc ke...@wikimedia.org wrote:
Hi,
The
Hi,
the dev team has committed to the following user stories for the sprint
starting today, ending August 5.
Bug ID | Component | Summary | Points
68516 | Wikimetrics | Story: Researcher has prototype for wikimania | 8
Total Points: 8
You can see the sprint here:
on them. The dev team takes this background noise
into account when committing to a sprint.
Regards,
Kevin Leduc
On Thu, 2014-07-17 at 19:52 -0400, Dan Andreescu wrote:
It looks to me like people are using the analytics keyword to
basically cross-post bugs from other components. I wasn't aware of
it and in my opinion I'd rather some analytics people be cc-ed on the
bug rather than the keyword. Let's do
On Wed, 2014-07-23 at 00:14 +0200, Federico Leva (Nemo) wrote:
Andre Klapper, 22/07/2014 23:20:
Alright, updated http://fab.wmflabs.org/T423 to state that we will get
rid of that keyword.
Can you define "get rid of"?
Drop the keyword from those tickets when migrating Bugzilla keywords to
Heja Analytics crew,
Wikimedia Bugzilla has an analytics keyword (apart from the Analytics
product).
See https://bugzilla.wikimedia.org/buglist.cgi?keywords=analytics for
its 67 open tickets currently.
As we're slowly planning for the post-Bugzilla shiny new Phabricator
world: Does anybody
Bugwrangler
http://blogs.gnome.org/aklapper/
Back online and caught up.
On Wed, Jul 9, 2014 at 11:00 PM, Sean Pringle sprin...@wikimedia.org
wrote:
FYI, I just managed to crash the TokuDB storage engine on analytics-store
MariaDB while replicating some schema changes for mediawiki.
It's busy recovering from the transaction log now.
Hello,
Just a brief note to let everyone know that the analytics team is hiring.
If you have an interest in analytics, Wikipedia and its sister projects,
we would love to hear from you.
Check our positions and apply:
https://www.mediawiki.org/wiki/Analytics/Research_and_Data#Open_positions
Hi Toby, Hi Kevin,
the email below hasn't seen a response this week, and the
respective pages have seen no updates.
I have no clue which parts of our work you want to consider separate
components/projects, or whom you'd consider to be project lead and so
on. Hence, I did not update the
I took a look at this and realized we needed to talk about it. I'm not really
sure what primary and secondary mean. Let's discuss in CH.
Thanks for the ping.
On May 3, 2014, at 8:14 AM, Christian Aistleitner
christ...@quelltextlich.at wrote:
Hi Toby, Hi Kevin,
the below Email hasn't
This could use some updating to reflect current team roles/membership:
https://www.mediawiki.org/wiki/Developers/Maintainers#Analytics
https://www.mediawiki.org/wiki/Analytics#Projects
--
Erik Möller
VP of Engineering and Product Development, Wikimedia Foundation
Hi,
On Mon, Apr 28, 2014 at 04:15:33PM -0700, Toby Negrin wrote:
I'm speaking with Jeff Gage and he has been working with
Chase on db1047.
Thanks Jeff, Chase, and Toby!
The analytics s1 slave is up and working again.
The EventLogging tables are still having the same problems around the