[SMW-devel] Status SMW, release plans for 1.5.0

2010-02-12 Thread Harwarth, Stefan (Bundeswehr)
> > Another oddity is that when using Spezial:Semantische Suche (is that
> > Special:Ask? I never used it before...) it always prepends the Query
> > part with "broadtable20" after clicking the "Finde Ergebnisse" ("Find
> > results") button - and this is even independent of asking for a
> > Category or a single page.
> 
> This is strange and could hint at a URL encoding problem. Can
> you reproduce the problem on sandbox.semantic-mediawiki.org?
> The code for Special:Ask was recently extended by Yaron, so
> maybe he has an idea if this could be related.

Yep, there seems to be a problem with the URL encoding of the p[]-array
and, no, I cannot reproduce it in the Sandbox:

Sandbox URL:
http://sandbox.semantic-mediawiki.org/wiki/Special:Ask?title=Special%3AAsk&q=[[Category%3ASMW+unit+tests]]&po=&sort_num=&order_num=ASC&eq=yes&p[format]=broadtable&p[limit]=20&p[headers]=&p[mainlabel]=&p[link]=&p[intro]=&p[outro]=&p[default]=&eq=yes#

My MW 1.15 with SMW 1.5 with Firefox:
http://suzwiki.man.m.dasa.de/smwtest/index.php?title=Spezial:Semantische_Suche&q=[[Kategorie%3AProdukt]]&order_num=ASC&eq=yes&p[]=broadtable&p[]=20&p[]=&p[]=&p[]=&p[]=&p[]=&p[]=

My MW 1.15 with SMW 1.5 with IE 6:
http://tor.suzwiki.man.m.dasa.de/torw/index.php?title=Spezial:Semantische_Suche&q=%5B%5BKategorie%3AProdukt%5D%5D&sc=0&eq=yes&p=format%3Dul

My MW 1.15 with SMW 1.4.3:
http://tor.suzwiki.man.m.dasa.de/torw/index.php?title=Spezial:Semantische_Suche&q=%5B%5BKategorie%3AProdukt%5D%5D&sc=0&eq=yes&p=format%3Dul

Seems to me that the p[]-array indices are missing.
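
For illustration, this is how PHP itself parses the two query-string
styles seen in the URLs above (plain PHP, nothing SMW-specific; the
variable names are mine):

  <?php
  // With keys, Special:Ask can tell which parameter is which.
  parse_str( 'p[format]=broadtable&p[limit]=20', $withKeys );
  parse_str( 'p[]=broadtable&p[]=20', $withoutKeys );
  print_r( $withKeys['p'] );    // Array ( [format] => broadtable [limit] => 20 )
  print_r( $withoutKeys['p'] ); // Array ( [0] => broadtable [1] => 20 )
  // Without the keys the meaning of each value is lost, which would
  // explain the "broadtable20" that gets prepended to the query.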


> > Whenever I put an #ask-query on a page that queries for all pages
> > in a category, then all I get is an empty page (not even the output
> > around the query is printed any more).
> 
> This is usually seen when some memory issue occurs, e.g. due
> to more source code being loaded now. You can try to increase
> the PHP memory limit in LocalSettings.php (requirements above
> 50M are unusual even for normal-sized queries; I have found it
> necessary on some wikis that create very long/broad result
> tables, but basic operation should usually work with 20M).
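
For reference, the setting meant here would be something like this in
LocalSettings.php (the value is only an example, following the numbers
above):

  <?php
  // LocalSettings.php -- raise the PHP memory limit for MediaWiki requests.
  ini_set( 'memory_limit', '50M' );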

Not much more information on that issue: I traced SMW's results back to
doAsk() and they are OK.

At the moment I think it might be more of an issue with the character
set in my DB and MediaWiki... maybe that is even the source of the
first problem.
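
To check that, something like the following (run via maintenance/eval.php;
purely diagnostic, not SMW code) should show which character sets MySQL is
actually using:

  $dbr = wfGetDB( DB_SLAVE );
  $res = $dbr->query( "SHOW VARIABLES LIKE 'character_set%'" );
  // Print each charset-related server variable, e.g. character_set_connection.
  while ( $row = $dbr->fetchObject( $res ) ) {
      print "{$row->Variable_name} = {$row->Value}\n";
  }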


Regards, Stefan



[SMW-devel] SMW scalability

2010-02-12 Thread Robert Murphy
Coders,

I am not a coder.  I'm not even any good at server maintenance.  But SMW is
taking my site down several times a day now.  My wiki is either the biggest,
or nearly the biggest SMW wiki (according to
http://www.semantic-mediawiki.org/wiki/Sites_using_Semantic_MediaWiki) with
250,000 pages.  My site runs out of memory and chokes all the time.  I
looked in /var/log/messages and it is full of things like

httpd: PHP Fatal error:  Out of memory (allocated 10747904) (tried to
allocate 4864 bytes) in
/home/reformedword/public_html/includes/AutoLoader.php on line 582

but the PHP file in question is different every time.  I'm getting one of
these kinds of errors every half hour or more.
Before you say, "Up your PHP memory", know that I did!  I went up from 64MB
to 128MB to 256MB.  Same story.  So I switched to babysitting "top -cd2".
When I change a page without semantic data, HTTPD and MYSQLD requests come,
linger and go.  But when I change a page with Semantic::Values, the HTTPD
and MYSQLD processes take a VERY long time to die, sometimes never.
Eventually the site runs out of memory.

Like I said, php.ini has 128MB memory and 60 second timeout for mysql.
apache has a 60 second timeout too.  Any help?

-Robert


Re: [SMW-devel] SMW scalability

2010-02-12 Thread Marco Mauritczat
Hi Robert,
I am also still new to SMW, but did you also adjust your settings in
MediaWiki's LocalSettings.php? Try setting the line
ini_set( 'memory_limit', '32M' );
to some higher value.

Greetings
Marco

-Original Message-
From: Robert Murphy [mailto:mrandmrsmur...@gmail.com] 
Sent: Friday, February 12, 2010 11:39 AM
To: Semantic MediaWiki Developers List
Subject: [SMW-devel] SMW scalability

Coders,

I am not a coder.  I'm not even any good at server maintenance.  But SMW is
taking my site down several times a day now.  My wiki is either the biggest,
or nearly the biggest SMW wiki (according to
http://www.semantic-mediawiki.org/wiki/Sites_using_Semantic_MediaWiki) with
250,000 pages.  My site runs out of memory and chokes all the time.  I
looked in /var/log/messages and it is full of things like

httpd: PHP Fatal error:  Out of memory (allocated 10747904) (tried to
allocate 4864 bytes) in
/home/reformedword/public_html/includes/AutoLoader.php on line 582

but the PHP file in question is different every time.  I'm getting one of
these kinds of errors every half hour or more.
Before you say, "Up your PHP memory", know that I did!  I went up from 64MB
to 128MB to 256MB.  Same story.  So I switched to babysitting "top -cd2".
When I change a page without semantic data, HTTPD and MYSQLD requests come,
linger and go.  But when I change a page with Semantic::Values, the HTTPD
and MYSQLD processes take a VERY long time to die, sometimes never.
Eventually the site runs out of memory.

Like I said, php.ini has 128MB memory and 60 second timeout for mysql.
apache has a 60 second timeout too.  Any help?

-Robert

--
SOLARIS 10 is the OS for Data Centers - provides features such as DTrace,
Predictive Self Healing and Award Winning ZFS. Get Solaris 10 NOW
http://p.sf.net/sfu/solaris-dev2dev
___
Semediawiki-devel mailing list
Semediawiki-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/semediawiki-devel


[SMW-devel] intro

2010-02-12 Thread Lydia Pintscher
Hi,

I just wanted to quickly introduce myself. I'm Lydia, and I'm now working
with ontoprise to open up the development process and build a community
around SMW+. This of course includes working with every one of you.
If you have questions, feedback or the like, please don't hesitate to
email me. I'm also on IRC in #semantic-mediawiki; my nick is Nightrose.
Looking forward to working with everyone.


Cheers
Lydia

-- 
Lydia Pintscher
ontoprise GmbH - know how to use Know-how

Amalienbadstraße 36 (Raumfabrik 29); 76227 Karlsruhe
eMail: pintsc...@ontoprise.de;  www: http://www.ontoprise.de
Sitz der Gesellschaft: Amtsgericht Mannheim, HRB 109540
Geschäftsführer: Prof. Dr. Jürgen Angele, Dipl.Wi.-Ing. Hans-Peter Schnurr



[SMW-devel] Documentation to SMW

2010-02-12 Thread Marco Mauritczat
Hi,

I wonder if there is any documentation for SMW other than
http://www.semantic-mediawiki.org/doc/ ? I'm thinking of code examples,
maybe even UML diagrams.

All I'm trying to do is query data from SMW the way the Special:Ask page
does and get the results in an array like result[number][property][value].
I produced some code for that, but it is a very dirty solution, as I am not
using SMW functions all the time. I am pretty certain that there is a much
more elegant way, probably even an integrated SMW function. Could a more
experienced SMW coder give me some help with that?
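
Roughly, what I am after is something like this (sketched from memory
against the SMW 1.5 query classes; I am not at all sure these are the
right entry points, so please correct me):

  // Build and run a query the way Special:Ask does, then flatten the
  // result into result[number][property][value].
  $query = SMWQueryProcessor::createQuery(
      '[[Category:City]]',             // example query string
      array(),                         // no extra parameters
      SMWQueryProcessor::INLINE_QUERY
  );
  $queryResult = smwfGetStore()->getQueryResult( $query );

  $result = array();
  while ( $row = $queryResult->getNext() ) { // one SMWResultArray per column
      $values = array();
      foreach ( $row as $field ) {
          $label = $field->getPrintRequest()->getLabel();
          while ( $dataValue = $field->getNextObject() ) {
              $values[$label][] = $dataValue->getShortWikiText();
          }
      }
      $result[] = $values;
  }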

Greetings
Marco



Re: [SMW-devel] SMW scalability

2010-02-12 Thread Robert Murphy
It's commented out, so the system is going with what's in php.ini, right?

On Fri, Feb 12, 2010 at 4:55 AM, Marco Mauritczat  wrote:

> Hi Robert,
> I am also still new to SMW, but did you also adjust your settings in
> MediaWiki's LocalSettings.php? Try setting the line
> ini_set( 'memory_limit', '32M' );
> to some higher value.
>
> Greetings
> Marco
>
> [snip]


Re: [SMW-devel] SMW scalability

2010-02-12 Thread Ryan Lane
Robert,

Did you restart the web server after upping the memory limit? Those
settings won't take effect otherwise.

V/r,

Ryan Lane

On Fri, Feb 12, 2010 at 8:24 AM, Robert Murphy  wrote:
> It's commented out, so the system is going with what's in php.ini, right?
>
> [snip]



Re: [SMW-devel] SMW scalability

2010-02-12 Thread Robert Murphy
Does
/etc/init.d/httpd restart
do enough?  That's what I did.

On Fri, Feb 12, 2010 at 6:38 AM, Ryan Lane  wrote:

> Robert,
>
> Did you restart the web server after upping the memory limit? Those
> settings won't take effect otherwise.
>
> V/r,
>
> Ryan Lane
>
> [snip]


Re: [SMW-devel] SMW scalability

2010-02-12 Thread Markus Krötzsch
On Friday, 12 February 2010, Robert Murphy wrote:
> Coders,
> 
> I am not a coder.  I'm not even any good at server maintenance.  But SMW is
> taking my site down several times a day now.  My wiki is either the
>  biggest, or nearly the biggest SMW wiki (according to
> http://www.semantic-mediawiki.org/wiki/Sites_using_Semantic_MediaWiki) with
> 250,000 pages.  My site runs out of memory and chokes all the time.  I
> looked in /var/log/messages and it is full of things like
> 
> httpd: PHP Fatal error:  Out of memory (allocated 10747904) (tried to
> allocate 4864 bytes) in
> /home/reformedword/public_html/includes/AutoLoader.php on line 582
> 
> but the PHP file in question is different every time.  I'm getting one of
> these kinds of errors every half hour or more.
> Before you say, "Up your PHP memory", know that I did!  I went up from 64MB
> to 128MB to 256MB.  Same story.  So I switched to babysitting "top -cd2".
> When I change a page without semantic data, HTTPD and MYSQLD requests come,
> linger and go.  But when I change a page with Semantic::Values, the HTTPD
> and MYSQLD processes take a VERY long time to die, sometimes never.
> Eventually the site runs out of memory.
> 
> Like I said, php.ini has 128MB memory and 60 second timeout for mysql.
> apache has a 60 second timeout too.  Any help?

Great, finally someone has a performance-related request (I sometimes feel 
that I am the only one who is concerned about performance).

Regarding PHP, I don't think that a memory limit of more than 50MB, or at 
most 100MB, can be recommended for any public site. Whatever dies beyond 
this point cannot be saved. On the other hand, PHP out-of-memory issues are 
hard to track down, since their cause is often not the function that 
allocates the final byte that uses up all memory. You have seen this in 
your logs.

One general thing that should be done on larger sites (actually on all sites!) 
is bytecode caching, see [1]. This significantly reduces the impact that large 
PHP files as such have on your memory requirements.
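
A quick way to verify that a bytecode cache is actually active (plain
diagnostic PHP, nothing MediaWiki-specific; APC is just one such cache):

  <?php
  // Report whether the APC extension (bytecode cache) is loaded.
  if ( extension_loaded( 'apc' ) ) {
      echo "APC is loaded; PHP bytecode caching should be active.\n";
  } else {
      echo "APC is not loaded; consider installing a bytecode cache.\n";
  }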

Out-of-memory issues usually result in blank pages that can only be edited 
by manually changing the URL to use the edit action. Finding these pages is 
crucial for tracking down the problem. In the context of SMW, I have seen 
memory issues when inline queries return a long list of results, each of 
which contains many values. The problem is worse when using templates for 
formatting, but it also occurs with tables. In my tests I have tracked this 
problem down to MediaWiki itself: manually writing a page with the contents 
produced by the large inline query also used up all memory, even without 
SMW being involved. If this is the case on your wiki, then my only advice 
is to change the SMW settings to restrict the size of query outputs so that 
pages cannot become so large. If this is not the problem you have, then it 
is important to find out which pages cause the issues in your wiki. Note 
that problems caused by MediaWiki jobs can also appear on random pages, 
since they do not depend on the page contents.
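
The settings meant here go in LocalSettings.php (after including SMW);
the names are from SMW's SMW_Settings.php, and the values below are only
examples:

  // Cap inline query results so a single page cannot exhaust memory.
  $smwgQMaxInlineLimit = 100; // max rows an inline query may return
  $smwgQMaxSize        = 12;  // max number of conditions per query
  $smwgQMaxDepth       = 4;   // max nesting depth of query conditions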


Regarding MySQL, you should activate and check MySQL's slow query log. It 
creates log files that show you which queries took particularly long, which 
can often be used to track down problematic queries and do something to 
prevent them.
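
For MySQL 5.0.x, the relevant my.cnf lines would be something like the
following (option names changed in later MySQL versions, so check the
manual for yours):

  [mysqld]
  # Log statements that take longer than long_query_time seconds.
  log-slow-queries = /var/log/mysql/mysql-slow.log
  long_query_time  = 2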


If you experience general site overload in a burst-like fashion, it might 
be that some over-zealous crawler is visiting your site, possibly 
triggering expensive operations. Check your Apache logs to see whether you 
have high load from certain robots or suspicious user agents, especially on 
special pages like Ask. Update your robots.txt to keep crawlers from 
browsing all results of an inline query (crawlers have been observed to do 
this).
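
For example, a robots.txt along these lines (the exact paths depend on
your URL layout):

  User-agent: *
  # Keep crawlers off dynamic views and query pages such as Special:Ask.
  Disallow: /index.php?
  Disallow: /wiki/Special:Ask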

-- Markus

[1] http://www.mediawiki.org/wiki/User:Robchurch/Performance_tuning



-- 
Markus Krötzsch  
* Personal page: http://korrekt.org
* Semantic MediaWiki: http://semantic-mediawiki.org
* Semantic Web textbook: http://semantic-web-book.org


Re: [SMW-devel] SMW scalability

2010-02-12 Thread Thomas Fellows
Just as a reference, I have a wiki (MW 1.13.2, SMW 1.3, PHP 5.2.3,
MySQL 5.0.45) with ~210,000 pages and ~5.2 million property values
(over 51 defined properties); the PHP memory limit is set to 128M, and I
believe it is using APC.  No problems in terms of memory limits being
reached -- how many properties/page and queries/page do you have? What
types of queries?

As always, Markus' suggestions are right on.

-Tom

On Fri, Feb 12, 2010 at 10:18 AM, Markus Krötzsch wrote:
> [snip]