JanZerebecki added a comment.
[wikidatawiki]> select change_id, change_object_id, change_revision_id from
wb_changes where LENGTH(change_info) > 65500 ;
+-----------+------------------+--------------------+
| change_id | change_object_id | change_revision_id |
+-----------+------------------+--------------------+
hoo added a comment.
After looking at the code thoroughly, I think this warning is just another
symptom, not the cause of the problems we're seeing.
TASK DETAIL
https://phabricator.wikimedia.org/T108130
EMAIL PREFERENCES
https://phabricator.wikimedia.org/settings/panel/emailpreferences/
gerritbot added a comment.
Change 230273 merged by jenkins-bot:
Hack: Don't write change rows where LENGTH(change_info) > 65500
https://gerrit.wikimedia.org/r/230273
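The merged change is described here only by its subject line. As an illustration, a guard of that shape might look like the following sketch; `save_change`, `MAX_INFO_LENGTH`, and the dict-based row are hypothetical names, not Wikibase's actual (PHP) code:

```python
import json

# Stay safely below the 65535-byte cap of a MySQL TEXT/BLOB column,
# where an oversized value would otherwise be silently truncated.
MAX_INFO_LENGTH = 65500

def save_change(row, change_info):
    """Serialize change_info into the row, refusing values that would
    exceed the column limit instead of letting them be cut off."""
    serialized = json.dumps(change_info)
    if len(serialized.encode("utf-8")) > MAX_INFO_LENGTH:
        # Writing this row would store truncated, unparseable JSON.
        return False
    row["change_info"] = serialized
    return True
```

The point of the hack is that a refused row is recoverable (the change is simply not dispatched), whereas a truncated blob fails later at unserialize time.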
gerritbot added a comment.
Change 230273 had a related patch set uploaded (by Hoo man):
Hack: Don't write change rows where LENGTH(change_info) > 65500
https://gerrit.wikimedia.org/r/230273
JanZerebecki added a comment.
From quickly skimming the relevant code, my estimate is that we don't need to
increase rc_params, because we explicitly avoid copying the content of
change_info into it.
So that we know which items this affected:
[wikidatawiki]> select change_object_id, chang
hoo added a comment.
I poked at this some more and iterated over all changes currently in the DB
using code very similar to what we use in Wikibase; the change ids mentioned
above are the only ones failing.
hoo added a comment.
In https://phabricator.wikimedia.org/T108130#1514967, @JanZerebecki wrote:
> Perhaps make it LONGBLOB so that it can hold more than the entity that is
> stored in a MEDIUMBLOB.
Mh... maybe. Also, do we need to resize the rc_params field as well?
gerritbot added a comment.
Change 229741 merged by jenkins-bot:
Log when we fail unserializeInfo
https://gerrit.wikimedia.org/r/229741
JanZerebecki added a comment.
Perhaps make it LONGBLOB so that it can hold more than the entity that is
stored in a MEDIUMBLOB.
gerritbot added a subscriber: gerritbot.
gerritbot added a comment.
Change 229741 had a related patch set uploaded (by JanZerebecki):
Log when we fail unserializeInfo
https://gerrit.wikimedia.org/r/229741
JanZerebecki added a comment.
[wikidatawiki]> select length(change_info) from wb_changes where change_id in
(236998945, 236999268, 237059314, 237059627) ;
+---------------------+
| length(change_info) |
+---------------------+
|               65535 |
|               65535 |
|
hoo added a comment.
I investigated this a bit and I can tell that we have cut off values in that
table:
SELECT change_id FROM wb_changes WHERE change_info NOT LIKE "%}" LIMIT 5;
+-----------+
| change_id |
+-----------+
| 236998945 |
| 236999268 |
| 237059314 |
| 237059627 |
+-----------+
hoo added a comment.
In https://phabricator.wikimedia.org/T108130#1513436, @JanZerebecki wrote:
> I don't see this in logstash for wm1017, but do see other warnings. Perhaps
> because it is a manually started cli hhvm process.
>
> Should we try something like for each row in wb_changes do
> jso
JanZerebecki added a comment.
I don't see this in logstash for wm1017, but do see other warnings. Perhaps
because it is a manually started CLI HHVM process.
Should we try something like: for each row in wb_changes, do
json_decode(change_info), batched by 500? It would only take 700-some
iterations.
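The batched scan proposed above could look roughly like this sketch (in Python rather than the PHP actually used; `fetch_batch` stands in for a real DB cursor, and only the 500-row batch size comes from the comment):

```python
import json

def find_broken_changes(fetch_batch, batch_size=500):
    """Scan wb_changes in id-ordered batches and return the ids of rows
    whose change_info no longer parses as JSON, e.g. because the value
    was truncated at the 65535-byte column limit."""
    broken = []
    last_id = 0
    while True:
        # fetch_batch(last_id, n) -> list of (change_id, change_info)
        # tuples with change_id > last_id, at most n rows.
        rows = fetch_batch(last_id, batch_size)
        if not rows:
            break
        for change_id, change_info in rows:
            try:
                json.loads(change_info)
            except (ValueError, TypeError):
                broken.append(change_id)
        last_id = rows[-1][0]
    return broken
```

Keyset pagination on change_id (rather than OFFSET) keeps each batch cheap even late in the scan.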
hoo added a comment.
In https://phabricator.wikimedia.org/T108130#1513400, @JanZerebecki wrote:
> This happens only on zend. This error can not be found in logstash as it
> seems that warnings from zend do not get sent there. So I assume this means
> it happened on terbium.
It also happened i
JanZerebecki added a comment.
This happens only on zend. This error cannot be found in logstash, as it seems
that warnings from zend do not get sent there. So I assume this means it
happened on terbium.
From my reading this would happen if we get from the DB a value that is either
null or the
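The failure mode described here (a NULL or truncated value coming back from the DB), together with the "Log when we fail unserializeInfo" patch above, can be sketched along these lines; `unserialize_info` and the logger name are illustrative, not the actual Wikibase code:

```python
import json
import logging

logger = logging.getLogger("wikibase.changes")

def unserialize_info(change_info):
    """Decode a change_info blob, logging (instead of emitting a PHP
    warning) when the value is NULL or not valid JSON, e.g. because it
    was truncated at the column limit."""
    if change_info is None:
        logger.error("change_info is NULL")
        return {}
    try:
        return json.loads(change_info)
    except ValueError:
        logger.error("failed to unserialize change_info: %r...",
                     change_info[:40])
        return {}
```

Logging the failing row makes the affected change ids discoverable in logstash, which is what the queries above then confirmed directly in the DB.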