Re: question regarding mysql database location

2009-11-24 Thread Manasi Save
Also, I forgot to mention that I have gone through the InnoDB option
innodb_data_file_path, but I can only specify it as:

innodb_data_file_path=ibdata1:2048M:autoextend:max:1024M;ibdata2:2048M:autoextend:max:1024M;

But not as:

innodb_data_file_path=/var/lib/mysql/data/ibdata1:2048M:autoextend:max:1024M;/var/lib/mysql/data1/ibdata1:2048M:autoextend:max:1024M;

Is there any way around this?
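One possible way out (an untested sketch: entries in innodb_data_file_path are resolved relative to innodb_data_home_dir, and setting that variable to an empty value lets each entry carry its own absolute path; directory names as in the example above):

```ini
[mysqld]
# Empty value: the data file entries below may use absolute paths.
innodb_data_home_dir =
# Note: only the last file in the list may carry :autoextend.
innodb_data_file_path = /var/lib/mysql/data/ibdata1:2048M;/var/lib/mysql/data1/ibdata2:2048M:autoextend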

Thanks and Regards,
Manasi Save
Artificial Machines Pvt Ltd.

> Hi All,
>
> I have asked this question before, but I don't think I described it
> well.
>
> Sorry for asking it again.
> I have multiple databases, but there is a limit on the number of folders
> that can be created inside one folder.
>
> I have the MySQL data directory set to /var/lib/mysql/data.
> Now, after creating 32000 folders I am not able to create any more. It's
> not like I want to create 32000 databases in it (which I wanted to
> earlier :-P).
>
> For example - I want to create 10 databases: 5 in
> /var/lib/mysql/data/d1 to d5
> and the other 5 in /var/lib/mysql/data/d6 to d10.
>
> But I want to access all of the databases, that is d1-d10.
>
> I can change the database location after the first 5 databases, but then I
> am not able to access the old five which I created in the old location.
>
>
> Please let me know if any more information is needed on this. I am really
> looking for a solution. Please help me.
> --
> Thanks and Regards,
> Manasi Save
> Artificial Machines Pvt Ltd.
>
>
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/mysql?unsub=manasi.s...@artificialmachines.com
>
>






question regarding mysql database location

2009-11-24 Thread Manasi Save
Hi All,

I have asked this question before, but I don't think I described it
well.

Sorry for asking it again.
I have multiple databases, but there is a limit on the number of folders
that can be created inside one folder.

I have the MySQL data directory set to /var/lib/mysql/data.
Now, after creating 32000 folders I am not able to create any more. It's
not like I want to create 32000 databases in it (which I wanted to
earlier :-P).

For example - I want to create 10 databases: 5 in
/var/lib/mysql/data/d1 to d5
and the other 5 in /var/lib/mysql/data/d6 to d10.

But I want to access all of the databases, that is d1-d10.

I can change the database location after the first 5 databases, but then I
am not able to access the old five which I created in the old location.


Please let me know if any more information is needed on this. I am really
looking for a solution. Please help me.
-- 
Thanks and Regards,
Manasi Save
Artificial Machines Pvt Ltd.







Re: How to concatenate a constant to an int?

2009-11-24 Thread Michael Dykman
untested, but you are looking for something like this (formatted for humans):

select
    concat(
        ifnull(if(l.client_id, 'L', null),
               ifnull(if(p.prospect_id, 'P', null), 'C')),
        c.contact_id) as reference_number
from contact c
left join prospect p on c.contact_id = p.prospect_id
left join client l on p.prospect_id = l.client_id

- md


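The same logic reads a little more directly with CASE (equally untested; same tables, columns, and join conditions as above):

```sql
select
    concat(
        case
            when l.client_id   is not null then 'L'  -- contact is a client
            when p.prospect_id is not null then 'P'  -- contact is a prospect
            else 'C'                                  -- plain contact
        end,
        c.contact_id) as reference_number
from contact c
left join prospect p on c.contact_id = p.prospect_id
left join client l on p.prospect_id = l.client_id;
```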


On Tue, Nov 24, 2009 at 8:43 PM, Neil Aggarwal  wrote:
>>         concat('C',c.contact_id) as ref
>
> That worked.  Thanks for the tip.
>
> Now, let's say I have three tables:
>
> contact
>        contact_id int
>
> prospect
>        prospect_id int
>
> client
>        client_id int
>
>
> If a contact is a prospect, it will have a line in
> both the contact and prospect table, with the same
> id value.
>
> If a contact is a client, it will have a line in
> the contact, prospect, and client table, all with
> the same id value.
>
> For example:
>
>        contact_id 1
>
>        contact_id 2
>        prospect_id 2
>
>        contact_id 3
>        prospect_id 3
>        client_id 3
>
> I want the ref numbers to be:
>        C1
>        P2
>        L3
>
> Is there a way to use a query to do that?
>
> Something like:
>
> create or replace view
> view_AllData as
> select
>        concat('C' or 'P' or 'L',c.contact_id) as reference_number,
> from contact c
> left join prospect p on c.contact_id = p.prospect_id
> left join client l on p.prospect_id = l.client_id
>
> Any ideas how to do this?
>
> Thanks
>        Neil
>
> --
> Neil Aggarwal, (281)846-8957, http://UnmeteredVPS.net
> Host Joomla!, Wordpress, phpBB, or vBulletin for $25/mo
> Unmetered bandwidth = no overage charges, 7 day free trial
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:    http://lists.mysql.com/mysql?unsub=mdyk...@gmail.com
>
>



-- 
 - michael dykman
 - mdyk...@gmail.com

"May you live every day of your life."
Jonathan Swift

Larry's First Law of Language Redesign: Everyone wants the colon.




RE: How to concatenate a constant to an int?

2009-11-24 Thread Neil Aggarwal
> concat('C',c.contact_id) as ref

That worked.  Thanks for the tip.

Now, let's say I have three tables:

contact
contact_id int

prospect
prospect_id int

client
client_id int


If a contact is a prospect, it will have a line in
both the contact and prospect table, with the same
id value.

If a contact is a client, it will have a line in
the contact, prospect, and client table, all with
the same id value.

For example:

contact_id 1

contact_id 2
prospect_id 2

contact_id 3
prospect_id 3
client_id 3

I want the ref numbers to be:
C1
P2
L3

Is there a way to use a query to do that?

Something like:

create or replace view
view_AllData as
select 
concat('C' or 'P' or 'L',c.contact_id) as reference_number,
from contact c
left join prospect p on c.contact_id = p.prospect_id
left join client l on p.prospect_id = l.client_id

Any ideas how to do this?

Thanks
Neil

--
Neil Aggarwal, (281)846-8957, http://UnmeteredVPS.net
Host Joomla!, Wordpress, phpBB, or vBulletin for $25/mo
Unmetered bandwidth = no overage charges, 7 day free trial 





Re: How to concatenate a constant to an int?

2009-11-24 Thread Michael Dykman
create or replace view view_AllData as
select
    c.name,
    concat('C', c.contact_id) as ref
from contact c


On Tue, Nov 24, 2009 at 6:52 PM, Neil Aggarwal  wrote:
> Hello:
>
> This seems like it should be simple, but I am having trouble
> figuring it out.
>
> I have a table contact which has:
>        name            String
>        contact_id      int
>
> Lets assume the contact table has this row:
>        name: Neil Aggarwal
>        contact_id: 1
>
> I want to create a view that has this data
>        name: Neil Aggarwal
>        ref: C1
>
> I did this:
>
> create or replace view
> view_AllData as
> select
>        c.name
>        ('C'+c.contact_id) as ref,
> from contact c
>
> but the ref does not have the leading 'C' in
> the front of it.
>
> Any ideas?
>
> Thanks,
>        Neil
>
> --
> Neil Aggarwal, (281)846-8957, http://UnmeteredVPS.net
> Host Joomla!, Wordpress, phpBB, or vBulletin for $25/mo
> Unmetered bandwidth = no overage charges, 7 day free trial
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:    http://lists.mysql.com/mysql?unsub=mdyk...@gmail.com
>
>



-- 
 - michael dykman
 - mdyk...@gmail.com

"May you live every day of your life."
Jonathan Swift

Larry's First Law of Language Redesign: Everyone wants the colon.




How to concatenate a constant to an int?

2009-11-24 Thread Neil Aggarwal
Hello:

This seems like it should be simple, but I am having trouble
figuring it out.

I have a table contact which has:
name        String
contact_id  int

Lets assume the contact table has this row:
name: Neil Aggarwal
contact_id: 1

I want to create a view that has this data
name: Neil Aggarwal
ref: C1

I did this:

create or replace view
view_AllData as
select 
c.name
('C'+c.contact_id) as ref,
from contact c

but the ref does not have the leading 'C' in
the front of it.

Any ideas?

Thanks, 
Neil

--
Neil Aggarwal, (281)846-8957, http://UnmeteredVPS.net
Host Joomla!, Wordpress, phpBB, or vBulletin for $25/mo
Unmetered bandwidth = no overage charges, 7 day free trial





Re: Specific benchmarking tool

2009-11-24 Thread Johan De Meersman
Yeah, I figured that out in the meantime :-) I was putting the log type
right after --split, and the damn thing didn't even throw an
'unknown field' error :-)

It's working now, thanks a lot !
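For the archives, the ordering that works: the attribute to split on must directly follow --split, and the log format goes to --type (log file name here is hypothetical):

```shell
# Split a general query log per connection thread, ready for playback.
mk-log-player --split Thread_id --type genlog general_query.log
```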

On Tue, Nov 24, 2009 at 8:27 PM, ewen fortune wrote:

> Johan,
>
> Yes, there are built in parsers for different formats, for example I
> was using the general log.
> mk-log-player --split Thread_id --type genlog
>
> (genlog was added the other day and is only in trunk so far)
>
> http://www.maatkit.org/doc/mk-log-player.html
>
> --type
>
>type: string; group: Split
>
>The type of log to --split (default slowlog). The permitted types are
>
>binlog
>
>Split a binary log file.
>slowlog
>
>    Split a log file in any variation of MySQL slow-log format.
>
> Cheers,
>
> Ewen
>
> On Tue, Nov 24, 2009 at 2:41 PM, Johan De Meersman 
> wrote:
> > Ewen,
> >
> > Do you need a specific log format or setting ? I'm debugging the tool,
> and
> > it uses ";\n#" as record separator, which is entirely not consistent with
> > the log format I get out of the mysql log. Does it perchance try to parse
> > zero-execution-time slowlogs instead of the regular log ?
> >
> >
> > On Sat, Nov 14, 2009 at 1:23 AM, Johan De Meersman 
> > wrote:
> >>
> >> Hmm, I got segfaults. I'll check after the weekend.
> >>
> >> On 11/13/09, ewen fortune  wrote:
> >> > Johan,
> >> >
> >> > What does? mk-log-player? - I just used it to split and play back 8G,
> >> > no problem.
> >> >
> >> >
> >> > Ewen
> >> >
> >> > On Fri, Nov 13, 2009 at 6:20 PM, Johan De Meersman <
> vegiv...@tuxera.be>
> >> > wrote:
> >> >> It seems to have a problem with multi-gigabyte files :-D
> >> >>
> >> >> On Fri, Nov 13, 2009 at 5:35 PM, Johan De Meersman <
> vegiv...@tuxera.be>
> >> >> wrote:
> >> >>>
> >> >>> Ooo, shiny ! Thanks, mate :-)
> >> >>>
> >> >>> On Fri, Nov 13, 2009 at 4:56 PM, ewen fortune <
> ewen.fort...@gmail.com>
> >> >>> wrote:
> >> 
> >>  Johan,
> >> 
> >>  The very latest version of mk-log-player can do that.
> >>  If you get the version from trunk:
> >> 
> >>  wget http://www.maatkit.org/trunk/mk-log-player
> >> 
> >>  mk-log-player --split Thread_id --type genlog
> >> 
> >>  Cheers,
> >> 
> >>  Ewen
> >> 
> >>  On Fri, Nov 13, 2009 at 4:33 PM, Johan De Meersman
> >>  
> >>  wrote:
> >>  > Hey all,
> >>  >
> >>  > I'm looking for a Mysql benchmarking/stresstesting tool that can
> >>  > generate a
> >>  > workload based on standard Mysql full query log files. The idea
> is
> >>  > to
> >>  > verify
> >>  > performance of real production loads on various database setups.
> >>  >
> >>  > Does anyone know of such a tool, free or paying ?
> >>  >
> >>  > Thx,
> >>  > Johan
> >>  >
> >> >>>
> >> >>
> >> >>
> >> >
> >
> >
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/mysql?unsub=vegiv...@tuxera.be
>
>


Re: Specific benchmarking tool

2009-11-24 Thread ewen fortune
Johan,

Yes, there are built in parsers for different formats, for example I
was using the general log.
mk-log-player --split Thread_id --type genlog

(genlog was added the other day and is only in trunk so far)

http://www.maatkit.org/doc/mk-log-player.html

--type

type: string; group: Split

The type of log to --split (default slowlog). The permitted types are

binlog

Split a binary log file.
slowlog

Split a log file in any variation of MySQL slow-log format.

Cheers,

Ewen

On Tue, Nov 24, 2009 at 2:41 PM, Johan De Meersman  wrote:
> Ewen,
>
> Do you need a specific log format or setting ? I'm debugging the tool, and
> it uses ";\n#" as record separator, which is entirely not consistent with
> the log format I get out of the mysql log. Does it perchance try to parse
> zero-execution-time slowlogs instead of the regular log ?
>
>
> On Sat, Nov 14, 2009 at 1:23 AM, Johan De Meersman 
> wrote:
>>
>> Hmm, I got segfaults. I'll check after the weekend.
>>
>> On 11/13/09, ewen fortune  wrote:
>> > Johan,
>> >
>> > What does? mk-log-player? - I just used it to split and play back 8G,
>> > no problem.
>> >
>> >
>> > Ewen
>> >
>> > On Fri, Nov 13, 2009 at 6:20 PM, Johan De Meersman 
>> > wrote:
>> >> It seems to have a problem with multi-gigabyte files :-D
>> >>
>> >> On Fri, Nov 13, 2009 at 5:35 PM, Johan De Meersman 
>> >> wrote:
>> >>>
>> >>> Ooo, shiny ! Thanks, mate :-)
>> >>>
>> >>> On Fri, Nov 13, 2009 at 4:56 PM, ewen fortune 
>> >>> wrote:
>> 
>>  Johan,
>> 
>>  The very latest version of mk-log-player can do that.
>>  If you get the version from trunk:
>> 
>>  wget http://www.maatkit.org/trunk/mk-log-player
>> 
>>  mk-log-player --split Thread_id --type genlog
>> 
>>  Cheers,
>> 
>>  Ewen
>> 
>>  On Fri, Nov 13, 2009 at 4:33 PM, Johan De Meersman
>>  
>>  wrote:
>>  > Hey all,
>>  >
>>  > I'm looking for a Mysql benchmarking/stresstesting tool that can
>>  > generate a
>>  > workload based on standard Mysql full query log files. The idea is
>>  > to
>>  > verify
>>  > performance of real production loads on various database setups.
>>  >
>>  > Does anyone know of such a tool, free or paying ?
>>  >
>>  > Thx,
>>  > Johan
>>  >
>> >>>
>> >>
>> >>
>> >
>
>




Re: MySQL Performance with large data

2009-11-24 Thread Michael Dykman
I second the RAID 10 recommendation, with as many spindles as you can
get. For any kind of load, even a read-only load, you are going to need it.

Also, that 8G of RAM is paltry for the kind of dataset you propose.
As already noted, the particulars will come down to the types and
frequency of the queries (not to mention expected performance targets),
but 4x64 CPUs churning that kind of data could really take advantage
of a lot more RAM.

 - michael dykman


On Tue, Nov 24, 2009 at 12:25 PM, Johan De Meersman  wrote:
> First off, for 4.000.000.000 records at 1867 bytes per record, you're gonna
> need more storage than that (over 7 terabytes if I did my maths right),
> unless you're using compressed tables - then your requirements will strongly
> depend on the actual data: text may easily compress to a factor ten, images
> (blobs?) almost not. Compressed tables will also speed up your I/O, in
> exchange for some more CPU load.
>
> On such a dataset, table scans are going to be geologically slow, so yes,
> good indexes will be your saviour :-)
>
> For speed, I'd also recommend that you get a RAID-10 setup. Go for a maximum
> amount of spindles, too - some form of SAN or locally-attached storage boxes
> with (relatively) small-capacity high-rpm disks.
>
>
>
> On Tue, Nov 24, 2009 at 5:39 PM, Manish Ranjan (Stigasoft) <
> manish.ran...@stigasoft.com> wrote:
>
>> Thank you Johan.
>>
>>
>>
>> The table will be read only. There will be two steps - first to get the
>> count using search conditions and then to get data from some columns based
>> on those search conditions. The fields will be indexed as per search
>> requirements.
>>
>>
>>
>>  _
>>
>> From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan De
>> Meersman
>> Sent: Tuesday, November 24, 2009 9:56 PM
>> To: Manish Ranjan (Stigasoft)
>> Cc: mysql@lists.mysql.com
>> Subject: Re: MySQL Performance with large data
>>
>>
>>
>> The amount and type of data is less the issue than the amount and type of
>> queries is :-) The machine you've described should be able to handle quite
>> a
>> bit of load, though, if well-tuned.
>>
>> On Tue, Nov 24, 2009 at 4:45 PM, Manish Ranjan (Stigasoft)
>>  wrote:
>>
>> Hi,
>>
>>
>>
>> I am using MySQL 5.0.45 in production environment. One of my tables (using
>> MyISAM Engine) is expected to have around 4 billion records and each record
>> will have 1867 bytes of data. All fields in this table are of character
>> data
>> type. I have 8 GB RAM on the server, RAID 5 with 750 GB storage space
>> available and quad core processor.
>>
>> My question is whether MySQL will be able to handle queries on this amount
>> of data? What all things I need to consider here?
>>
>> Thank you.
>>
>>
>>
>>
>



-- 
 - michael dykman
 - mdyk...@gmail.com

"May you live every day of your life."
Jonathan Swift

Larry's First Law of Language Redesign: Everyone wants the colon.




Re: MySQL Performance with large data

2009-11-24 Thread Johan De Meersman
First off, for 4.000.000.000 records at 1867 bytes per record, you're gonna
need more storage than that (over 7 terabytes if I did my maths right),
unless you're using compressed tables - then your requirements will strongly
depend on the actual data: text may easily compress to a factor ten, images
(blobs?) almost not. Compressed tables will also speed up your I/O, in
exchange for some more CPU load.

On such a dataset, table scans are going to be geologically slow, so yes,
good indexes will be your saviour :-)

For speed, I'd also recommend that you get a RAID-10 setup. Go for a maximum
amount of spindles, too - some form of SAN or locally-attached storage boxes
with (relatively) small-capacity high-rpm disks.
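As a quick back-of-the-envelope check (row data only, ignoring indexes and per-row overhead):

```shell
# 4 billion rows at 1867 bytes each.
bytes=$((4000000000 * 1867))
echo "$bytes bytes"                           # 7468000000000 bytes
echo "approx $((bytes / 1000000000000)) TB of raw row data"
```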



On Tue, Nov 24, 2009 at 5:39 PM, Manish Ranjan (Stigasoft) <
manish.ran...@stigasoft.com> wrote:

> Thank you Johan.
>
>
>
> The table will be read only. There will be two steps - first to get the
> count using search conditions and then to get data from some columns based
> on those search conditions. The fields will be indexed as per search
> requirements.
>
>
>
>  _
>
> From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan De
> Meersman
> Sent: Tuesday, November 24, 2009 9:56 PM
> To: Manish Ranjan (Stigasoft)
> Cc: mysql@lists.mysql.com
> Subject: Re: MySQL Performance with large data
>
>
>
> The amount and type of data is less the issue than the amount and type of
> queries is :-) The machine you've described should be able to handle quite
> a
> bit of load, though, if well-tuned.
>
> On Tue, Nov 24, 2009 at 4:45 PM, Manish Ranjan (Stigasoft)
>  wrote:
>
> Hi,
>
>
>
> I am using MySQL 5.0.45 in production environment. One of my tables (using
> MyISAM Engine) is expected to have around 4 billion records and each record
> will have 1867 bytes of data. All fields in this table are of character
> data
> type. I have 8 GB RAM on the server, RAID 5 with 750 GB storage space
> available and quad core processor.
>
> My question is whether MySQL will be able to handle queries on this amount
> of data? What all things I need to consider here?
>
> Thank you.
>
>
>
>


Re: phpMyAdmin links?

2009-11-24 Thread Jan Steinman
Is there a way to have a permalink to pages in phpMyAdmin,  
particularly record editing pages?


I successfully did it -- for a while -- by simply copying the URL when  
I was on a record editing page and replacing the obvious key GET  
parameter with a variable. But that URL has a GET parameter called  
"token" with an opaque identifier that I'll bet is a session handle.  
You can't go to the record without it, and if it's included, it stops  
working after some period of time.


I'm doing some custom web database work, and for debug and admin  
purposes only, I want to create a link whenever a primary key is shown  
that will allow "let me at the raw data, dammit" editing.


Thanks for whatever help you can offer, including suggesting a  
different list or approach or tool. (I haven't looked for a phpMyAdmin  
list yet.)


 The light at the end of the tunnel is a man with a flashlight  
yelling, "Go back! Go back!" -- Sol Stein 

 Jan Steinman  







RE: MySQL Performance with large data

2009-11-24 Thread Manish Ranjan (Stigasoft)
Thank you Johan.

 

The table will be read only. There will be two steps - first to get the
count using search conditions and then to get data from some columns based
on those search conditions. The fields will be indexed as per search
requirements.

 

  _  

From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan De
Meersman
Sent: Tuesday, November 24, 2009 9:56 PM
To: Manish Ranjan (Stigasoft)
Cc: mysql@lists.mysql.com
Subject: Re: MySQL Performance with large data

 

The amount and type of data is less the issue than the amount and type of
queries is :-) The machine you've described should be able to handle quite a
bit of load, though, if well-tuned.

On Tue, Nov 24, 2009 at 4:45 PM, Manish Ranjan (Stigasoft)
 wrote:

Hi,



I am using MySQL 5.0.45 in production environment. One of my tables (using
MyISAM Engine) is expected to have around 4 billion records and each record
will have 1867 bytes of data. All fields in this table are of character data
type. I have 8 GB RAM on the server, RAID 5 with 750 GB storage space
available and quad core processor.

My question is whether MySQL will be able to handle queries on this amount
of data? What all things I need to consider here?

Thank you.

 



Re: MySQL Performance with large data

2009-11-24 Thread Johan De Meersman
The amount and type of data is less the issue than the amount and type of
queries is :-) The machine you've described should be able to handle quite a
bit of load, though, if well-tuned.

On Tue, Nov 24, 2009 at 4:45 PM, Manish Ranjan (Stigasoft) <
manish.ran...@stigasoft.com> wrote:

> Hi,
>
>
>
> I am using MySQL 5.0.45 in production environment. One of my tables (using
> MyISAM Engine) is expected to have around 4 billion records and each record
> will have 1867 bytes of data. All fields in this table are of character
> data
> type. I have 8 GB RAM on the server, RAID 5 with 750 GB storage space
> available and quad core processor.
>
> My question is whether MySQL will be able to handle queries on this amount
> of data? What all things I need to consider here?
>
> Thank you.
>
>


Re: How normal mysql server 5.1 uses multiple cores

2009-11-24 Thread mos

At 06:44 AM 11/24/2009, you wrote:

2009/11/24 Johan De Meersman :
> If you are wondering about parallel query execution (that is, splitting a
> single query over multiple cores for faster execution), that is currently
> not supported by MySQL.

[offtopic]
Probably this is something stupid, but could that be done with NDB Cluster
on a single host? Anyway, I suppose the performance losses on distributed
joins and so on would outweigh the multiple-core benefits. And for most
queries, the bottleneck is usually on disk access, not the processor. Has
anybody done any serious testing on this?


Jaime,
   Well it all depends on the SQL that is being executed, the table 
structure and the size of the query. Now for a particular case you can do 
your own benchmarking quite easily to see if disk speed is more relevant 
than CPU speed. Copy your tables into a MEMORY table and do the joins 
there. Compare that to a disk join (reset the query cache) and see the 
improvement.  I'm guessing you will probably see a 300% improvement over 
disk. As mentioned earlier, MySQL does not scale up very well with multiple 
processors which is why it is better to scale out horizontally with clusters.
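Sketched out, with a hypothetical table t1 (SQL_NO_CACHE serves the same purpose as resetting the query cache):

```sql
-- Copy the table into the MEMORY engine. The copy must fit within
-- max_heap_table_size, and MEMORY tables cannot hold BLOB/TEXT columns.
CREATE TABLE t1_mem ENGINE=MEMORY SELECT * FROM t1;

-- Join against the in-memory copy ...
SELECT SQL_NO_CACHE COUNT(*) FROM t1_mem a JOIN t1_mem b ON a.id = b.id;

-- ... then against the on-disk original, and compare the timings.
SELECT SQL_NO_CACHE COUNT(*) FROM t1 a JOIN t1 b ON a.id = b.id;
```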


Mike 






MySQL Performance with large data

2009-11-24 Thread Manish Ranjan (Stigasoft)
Hi,

 

I am using MySQL 5.0.45 in production environment. One of my tables (using
MyISAM Engine) is expected to have around 4 billion records and each record
will have 1867 bytes of data. All fields in this table are of character data
type. I have 8 GB RAM on the server, RAID 5 with 750 GB storage space
available and quad core processor. 

My question is whether MySQL will be able to handle queries on this amount
of data? What all things I need to consider here? 

Thank you.



Re: How normal mysql server 5.1 uses multiple cores

2009-11-24 Thread Jaime Crespo Rincón
2009/11/24 Johan De Meersman :
> If you are wondering about parallel query execution (that is, splitting a
> single query over multiple cores for faster execution), that is currently
> not supported by MySQL.

[offtopic]
Probably this is something stupid, but could that be done with NDB Cluster
on a single host? Anyway, I suppose the performance losses on distributed
joins and so on would outweigh the multiple-core benefits. And for most
queries, the bottleneck is usually on disk access, not the processor. Has
anybody done any serious testing on this?

-- 
Jaime Crespo
MySQL & Java Instructor
Warp Networks





Re: How normal mysql server 5.1 uses multiple cores

2009-11-24 Thread Johan De Meersman
the command 'top -H' will show you the individual threads with their CPU
use, but I'm afraid I don't know how to link that up with a MySQL 'show
processlist'.
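To narrow it down to mysqld alone (a sketch, assuming Linux and a single mysqld instance; note that MySQL 5.1 itself offers no built-in mapping between OS thread IDs and SHOW PROCESSLIST ids):

```shell
# Per-thread CPU usage for the mysqld process only.
top -H -p "$(pidof mysqld)"
```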


On Tue, Nov 24, 2009 at 12:22 PM, Manasi Save <
manasi.s...@artificialmachines.com> wrote:

> Hi Johan,
>
> Thanks for the quick response.
>
> Is there any command available in MySQL with which I can check how much
> CPU is being used by each MySQL thread? Or any article on how MySQL
> multi-threading works?
>
> --
> Thanks and Regards,
> Manasi Save
> Artificial Machines Pvt Ltd.
>
> > MySQL is already a multithreaded process, even though you only see a
> > single
> > process. Note that it doesn't scale very well above eight or so cores,
> > especially InnoDB iirc.
> >
> > If you are wondering about parallel query execution (that is, splitting a
> > single query over multiple cores for faster execution), that is currently
> > not supported by MySQL.
> >
> >
> > On Tue, Nov 24, 2009 at 12:02 PM, Manasi Save <
> > manasi.s...@artificialmachines.com> wrote:
> >
> >> Hi All,
> >>
> >> Can anyone provide me any input on how to make MySQL use the multiple
> >> CPU cores available?
> >>
> >> I am sorry if I am sounding very unclear with this. Let me know if you
> >> have any questions.
> >>
> >> Thanks in advance.
> >>
> >> --
> >> Regards,
> >> Manasi Save
> >> Artificial Machines Pvt Ltd.
> >>
> >>
> >>
> >>
> >> --
> >> MySQL General Mailing List
> >> For list archives: http://lists.mysql.com/mysql
> >> To unsubscribe:
> http://lists.mysql.com/mysql?unsub=vegiv...@tuxera.be
> >>
> >>
> >
>
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/mysql?unsub=vegiv...@tuxera.be
>
>


Re: How normal mysql server 5.1 uses multiple cores

2009-11-24 Thread Manasi Save
Hi Johan,

Thanks for the quick response.

Is there any command available in MySQL with which I can check how much
CPU is being used by each MySQL thread? Or any article on how MySQL
multi-threading works?

-- 
Thanks and Regards,
Manasi Save
Artificial Machines Pvt Ltd.

> MySQL is already a multithreaded process, even though you only see a
> single
> process. Note that it doesn't scale very well above eight or so cores,
> especially InnoDB iirc.
>
> If you are wondering about parallel query execution (that is, splitting a
> single query over multiple cores for faster execution), that is currently
> not supported by MySQL.
>
>
> On Tue, Nov 24, 2009 at 12:02 PM, Manasi Save <
> manasi.s...@artificialmachines.com> wrote:
>
>> Hi All,
>>
>> Can anyone provide me any input on how to make MySQL use the multiple
>> CPU cores available?
>>
>> I am sorry if I am sounding very unclear with this. Let me know if you
>> have any questions.
>>
>> Thanks in advance.
>>
>> --
>> Regards,
>> Manasi Save
>> Artificial Machines Pvt Ltd.
>>
>>
>>
>>
>> --
>> MySQL General Mailing List
>> For list archives: http://lists.mysql.com/mysql
>> To unsubscribe:http://lists.mysql.com/mysql?unsub=vegiv...@tuxera.be
>>
>>
>






Re: How normal mysql server 5.1 uses multiple cores

2009-11-24 Thread Johan De Meersman
MySQL is already a multithreaded process, even though you only see a single
process. Note that it doesn't scale very well above eight or so cores,
especially InnoDB iirc.

If you are wondering about parallel query execution (that is, splitting a
single query over multiple cores for faster execution), that is currently
not supported by MySQL.


On Tue, Nov 24, 2009 at 12:02 PM, Manasi Save <
manasi.s...@artificialmachines.com> wrote:

> Hi All,
>
> Can anyone provide me any input on how to make MySQL use the multiple CPU
> cores available?
>
> I am sorry if I am sounding very unclear with this. Let me know if you have
> any questions.
>
> Thanks in advance.
>
> --
> Regards,
> Manasi Save
> Artificial Machines Pvt Ltd.
>
>
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/mysql?unsub=vegiv...@tuxera.be
>
>


How normal mysql server 5.1 uses multiple cores

2009-11-24 Thread Manasi Save
Hi All,

Can anyone provide me any input on how to make MySQL use the multiple CPU
cores available?

I am sorry if I am sounding very unclear with this. Let me know if you have
any questions.

Thanks in advance.

-- 
Regards,
Manasi Save
Artificial Machines Pvt Ltd.



