Re: loading 0x00A0 into mysql

2014-01-07 Thread Dobromir Velev
Hi,
Can you try passing the dump file through hexdump or some binary editor to
see if the data is there? Most text editors will treat 0x00 as end of string,
and most likely this is what is causing the problem.

Additionally, you can try running the import with --default-character-set=utf8
in case the default charset is something else, like this:

mysql --default-character-set=utf8 --user=me test_database  < dump_file
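
To see which character set the client and connection actually end up using, and
to force them for the current session, something like this can be run from the
mysql prompt (just a sketch):

SHOW VARIABLES LIKE 'character_set%';   -- character_set_client/connection/results show what mysql expects
SET NAMES utf8;                         -- force client, connection and results to utf8 for this session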


Dobromir

Thanks for the reply, and I apologize because I expect I've broken
threading. The list isn't mailing the posts to me, so I've nothing to
reply to. I've had to cut and paste from the web archive...

 2014/01/06 12:18 +, Dave Howorth 
>> Everything appears to work except that text fields containing a
>> Unicode non-breaking space (0x00A0) are truncated just before that
>> character. I can see the field in the dump file and it looks OK, but
>> it doesn't all make it into the new database.

> Well, there are too many aspects to this, but the first is the
> character set that "mysql" expects for input. If, say, it is USASCII
> (note that between the character set that "mysql" takes for input and
> the character set in the table no association is needful), the "nbsp"
> is out of range.

Hmm, is there any way to tell what character set mysql expects, or
better yet to tell it what to read? Or can I tell mysqldump to encode
its output differently?

(I promise to RTFM, but want to get this question out there whilst I'm
reading!)

> (It is, of course, not nice if "mysqldump" yields an output that
> "mysql" cannot read.)

Indeed; I'd go so far as to call that a bug. But that does seem to be
what's happening.

> Try entering it with some escape-sequence (this one is based on the
> original SQL with features from PL1, not from C, which MySQL supports
> if 'ANSI' is in "sql_mode"):

I don't understand the 'sql_mode', though I expect I can look that up
too. But I did try these:

> 'some text ... ' || X'A0' || ' ... more text ...'

causes the contents of the field to be '1'.
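
That '1' looks like logical OR at work: unless 'ANSI' (or at least
PIPES_AS_CONCAT) is in sql_mode, MySQL treats '||' as OR rather than
concatenation. A quick sketch to compare the two behaviours:

SELECT 'some text ' || X'A0' || ' more text';   -- with the default sql_mode, || is logical OR, so this collapses to 1
SET SESSION sql_mode = 'ANSI';
SELECT 'some text ' || X'A0' || ' more text';   -- with ANSI (which includes PIPES_AS_CONCAT), || concatenates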

> or (slightly less PL1)
>
> CONCAT('some text ... ', X'A0', ' ... more text ...')

Produces the same effect as embedding the character directly. i.e. the
value of the field is truncated just before the problem character.

However, substituting the string ' ' for the character does
allow mysql to read past it. I've now discovered that it also blows up
on some other characters with the top bit set, such as 0x91. What's
strange about that is that they used to work. So my first thought now is
that something has changed recently. Perhaps an update to one of the
servers or clients involved? I don't remember changing anything in my
code, but I can't be absolutely sure.

Cheers, Dave




Re: multiple choice dropdown box puzzle

2009-02-23 Thread Dobromir Velev
Hi,
It will never work this way. MySQL has no knowledge of the PHP variable
$categoriesIN.

you can try something like this:
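
Roughly, the aim is for your PHP to collapse the selected categories into one
multi-row INSERT rather than putting the loop inside the SQL string. As a
sketch, the statement that should reach MySQL (column name 'example' taken from
your post, values assuming categories 1, 3 and 5 were picked) is:

INSERT INTO temp (example) VALUES (1), (3), (5);   -- one row per selected category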



And you should always sanitize the input from forms - the above example is
just to show how this works. You should check that all the values in
$categoriesIN are ones you are expecting from the form.

Hope this helps
Dobromir Velev
WebSitePulse


On Monday 23 February 2009 17:25, PJ wrote:
> I think this is a tough one... and way above my head:
> PLEASE READ ALL OF THE ABOVE TO UNDERSTAND WHAT I AM TRYING TO DO.
> Having a bit of a rough time figuring out how to formulate php-mysql to
> insert data into fields using a multiple dropdown box in a form.
>
> to post I am using the following:
> snip...
> $categoriesIN = $_POST["categoriesIN"];
>
> ...snip...
>
> 
>   Choose Categories...
>   1
>   2
>   3
>   4
>   5
>   
>
> ...snip...
>
> $sql4 = "FOR ( $ii = 0 ; $ii < count($categoriesIN) ; $ii++ )
>   INSERT INTO temp (example) $categoriesIN[$ii]" ;
>   $result4 = mysql_query($sql4, $db);
> ...snip
>
> this does not work! The other posts work like a charm... but this...
>
> I cannot figure out what I should be entering where... I have tried several
> different configurations, but nothing seems to work...
>
> I found this as a model for entering the selections but can't figure out
> how to modify it for my needs:
>
> 
>  Choose your location(s) 
> 3100
> 3105
>  3503
>  3504
> 
>
> What I would like to do is something like the following:
> 
>   Choose Categories...
>   History
>   Temples
>   Pharaohs and Queens
>   Cleopatra
>   Mummies
>   
> and going further, I would like to be able to use a table that actually
> holds these values to feed them to the code above. I am sure this is
> possible but it must take some huge knowledge and experience to do it.
>
> BUT ...
> as I look at things, I am wondering if the FOR statement in the above
> should be used to do several INSERTs, that is, one $sql(number) per
> selected category... now, would that require many $sqls or many INSERTs
> within the $sql ?
>
>
> --
>
> Phil Jourdan --- p...@ptahhotep.com
>http://www.ptahhotep.com
>http://www.chiccantine.com




Re: Why do quotes in an IN() clause effect performance so drastically?

2009-02-18 Thread Dobromir Velev
Hi,
I guess the id_file_set is an INT?

The problem is most likely due to the fact that you are comparing an integer to
a string, which forces MySQL to use type conversion. For more information
check http://dev.mysql.com/doc/refman/5.0/en/type-conversion.html

When type conversion occurs MySQL will not be able to use the index and will
have to do a full table scan, which can be seen from your EXPLAIN output. As
for the 'bogus' case, it has most likely been dropped because it cannot be
converted to an integer. This case is explained at

http://dev.mysql.com/doc/refman/5.0/en/mysql-indexes.html
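
If the application layer cannot easily be changed to send bare numbers,
explicitly casting the quoted values back to integers keeps the comparison
numeric (a sketch, using the names from your query):

SELECT * FROM bite_event_log
WHERE id_file_set IN (CAST('-1' AS SIGNED), CAST('2412948' AS SIGNED))
LIMIT 1;   -- the IN() list is numeric again, so the id_file_set index can be considered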

Hope this helps.

Regards
Dobromir Velev



On Wednesday 18 February 2009 05:25, Daevid Vincent wrote:
> I'm really confused. First, I don't understand why quoting my IN()
> values here caused them to run significantly slower than the non-quoted
> versions... on just this simple contrived example it can be as much as
> 2.2 seconds vs. 0 seconds to return on a table that has 2.5M rows.
>
> The problem I'm facing is that the stupid PEAR::DB class is
> "smart-quoting" a list of values and giving me this:
>
> mysql> explain select * from bite_event_log where id_file_set in
> ('-1','2412948') limit 1;
> +----+-------------+----------------+------+---------------+------+---------+------+---------+-------------+
> | id | select_type | table          | type | possible_keys | key  | key_len | ref  | rows    | Extra       |
> +----+-------------+----------------+------+---------------+------+---------+------+---------+-------------+
> |  1 | SIMPLE      | bite_event_log | ALL  | id_file_set   | NULL | NULL    | NULL | 1213328 | Using where |
> +----+-------------+----------------+------+---------------+------+---------+------+---------+-------------+
>
> But what I really want is for it to do this:
>
> mysql> explain select * from bite_event_log where id_file_set in
> (-1,2412948) limit 1;
> +----+-------------+----------------+-------+---------------+-------------+---------+------+------+-------------+
> | id | select_type | table          | type  | possible_keys | key         | key_len | ref  | rows | Extra       |
> +----+-------------+----------------+-------+---------------+-------------+---------+------+------+-------------+
> |  1 | SIMPLE      | bite_event_log | range | id_file_set   | id_file_set | 5       | NULL |    2 | Using where |
> +----+-------------+----------------+-------+---------------+-------------+---------+------+------+-------------+
>
> Mixing quoted and non-quoted is said to be "bad":
> http://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#function_in
>
> mysql> explain select * from bite_event_log where id_file_set in
> ('-1',2412948) limit 1;
> +----+-------------+----------------+------+---------------+------+---------+------+---------+-------------+
> | id | select_type | table          | type | possible_keys | key  | key_len | ref  | rows    | Extra       |
> +----+-------------+----------------+------+---------------+------+---------+------+---------+-------------+
> |  1 | SIMPLE      | bite_event_log | ALL  | id_file_set   | NULL | NULL    | NULL | 1213328 | Using where |
> +----+-------------+----------------+------+---------------+------+---------+------+---------+-------------+
>
> However, aside from the straight numerical one above (2nd down), this
> version is the second best performing!?
>
> And furthermore, using a word string like "bogus" significantly
> out-performs another string such as "-1". Huh?!? WTF?
> It's like mySQL was "smart enough" to know that "bogus" could be
> dropped, whereas it's not smart enough to know to drop "-1",
> despite the fact that the id_file_set column is an unsigned integer.
>
> mysql> explain select * from bite_event_log where id_file_set in
> ('bogus',2412948) limit 1;
> +----+-------------+----------------+-------+---------------+-------------+---------+------+------+-------------+
> | id | select_type | table          | type  | possible_keys | key         | key_len | ref  | rows | Extra       |
> +----+-------------+----------------+-------+---------------+-------------+---------+------+------+-------------+
> |  1 | SIMPLE      | bite_event_log | range | id_file_set   | id_file_set | 5       | NULL |    2 | Using where |
> +----+-------------+----------------+-------+---------------+-------------+---------+------+------+-------------+




Re: Symlink InnoDB tables without stopping MySQL

2008-04-24 Thread Dobromir Velev
Hi,
I guessed it was something like that, and that is why I wanted to make sure how
it should be done. Using ALTER TABLE table DISCARD TABLESPACE doesn't
seem to work as expected - I managed to crash the test server twice. See
the mysql log details below.

What I did was the following:
 - create the table
 - check the InnoDB status and copy the table.ibd file to a new location
 - run ALTER TABLE table DISCARD TABLESPACE
 - symlink the table.ibd copy within the database folder
 - run ALTER TABLE table IMPORT TABLESPACE
 - run show table status like 'table';
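
In SQL terms the sequence was roughly this (a sketch; 'test' stands for the real
table name, and the file copy/symlink steps happen outside MySQL):

-- the table.ibd file had already been copied to the new disk at this point
ALTER TABLE test DISCARD TABLESPACE;
-- symlink the copied test.ibd back into the database directory, then:
ALTER TABLE test IMPORT TABLESPACE;
SHOW TABLE STATUS LIKE 'test';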

And here are the crash details from the log.

InnoDB: buf pool start is at 0x3666c000, end at 0xb366c000
InnoDB: Probable reason is database corruption or memory
InnoDB: corruption. If this happens in an InnoDB database recovery,
InnoDB: you can look from section 6.1 at http://www.innodb.com/ibman.html
InnoDB: how to force recovery.
080424  4:31:55InnoDB: Assertion failure in thread 68795312 in 
file ./../include/buf0buf.ic line 262
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/mysql/en/Forcing_recovery.html
InnoDB: about forcing recovery.
InnoDB: Thread 73976752 stopped in file ./../include/sync0sync.ic line 111
InnoDB: Thread 729131952 stopped in file sync0arr.c line 336
InnoDB: Thread 150207408 stopped in file sync0arr.c line 336
InnoDB: Thread 747498416 stopped in file sync0arr.c line 336
InnoDB: Thread 63421360 stopped in file ./../include/sync0sync.ic line 111
mysqld got signal 11;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.

key_buffer_size=1048576000
read_buffer_size=507904
max_used_connections=601
max_connections=600
threads_connected=394
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections = 
1935995 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

thd=0x2c24e950
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
Cannot determine thread, fp=0x4197e0c, backtrace may not be correct.
Stack range sanity check OK, backtrace follows:
0x8136da4
0x438898
(nil)
0x8299f88
0x829a024
0x81c2f5b
0x81d6f60
0x814a563
0x814e66c
0x814f08a
0x814f8e5
0x8150330
0x432371
0x38cffe
New value of fp=(nil) failed sanity check, terminating stack trace!
Please read http://dev.mysql.com/doc/mysql/en/Using_stack_trace.html and 
follow instructions on how to resolve the stack trace. Resolved
stack trace is much more helpful in diagnosing the problem, so please do
resolve it
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort...
thd->query at 0xbb66460 = show table status like 'temp%'
thd->thread_id=2545123
The manual page at http://www.mysql.com/doc/en/Crashing.html contains
information that should help you find out what is causing the crash.

Number of processes running now: 0
080424 04:31:56  mysqld restarted
080424  4:31:57  InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.

Thanks for your help
Dobromir Velev

On Wednesday 23 April 2008 22:05, Jerry Schwartz wrote:
> If Linux works the same way as HP-UX (and it should), anything you do to an
> open file (including deleting it) has no effect until the file is closed.
> The MySQL server is still using the "old" file. The next time it stops and
> restarts, it will follow the symlink. I don't know what the effect of
> accessing a "stale" copy of the file will do.
>
> Regards,
>
> Jerry Schwartz
> The Infoshop by Global Information Incorporated
> 195 Farmington Ave.
> Farmington, CT 06032
>
> 860.674.8796 / FAX: 860.674.8341
>
> www.the-infoshop.com
> www.giiexpress.com
> www.etudes-marche.com
>
> >-Original Message-
> >From: Sebastian Mendel [mailto:[EMAIL PROTECTED]
> >Sent: Wednesday, April 23, 2008 9:27 AM
> >To: Dobromir Velev
> >Cc: mysql@lists.mysql.com
> >Subject: Re: Symlink InnoDB tables without stoping MySQL
> >
> >Dobromir Velev schrieb:
> >> Hi,
> >> What I'm trying to do is to create a new InnoDB table on a different
> >
> >disk and
> >
> >> symlink it to an existing database.
> >> I have innod

Re: Symlink InnoDB tables without stopping MySQL

2008-04-23 Thread Dobromir Velev
Hi,
Thanks for pointing it out - I just found the following commands.

ALTER TABLE tbl_name DISCARD TABLESPACE;
ALTER TABLE tbl_name IMPORT TABLESPACE;

I will test it and let you know if it works.

Thanks
Dobromir Velev



On Wednesday 23 April 2008 16:27, Sebastian Mendel wrote:
> Dobromir Velev schrieb:
> > Hi,
> > What I'm trying to do is to create a new InnoDB table on a different disk
> > and symlink it to an existing database.
> > I have innodb_file_per_table turned on and here is how I tried to do it
> >
> >
> > mysql> \u test
> > mysql> create table test (...) ENGINE  = 'InnoDB';
> > mysql>\q
> >
> > move the test.ibd file to the other disk
> > create a simlink in the database directory
> > flush tables;
> >
> >
> > This works as expected but there is something that bothers me - I
> > inserted about 60K rows in the new table and all queries I tried are
> > working including selects, inserts and updates. The "SHOW TABLE STATUS"
> > command displays relevant results and still the test.ibd file to which
> > the symlink points hasn't been changed or accessed at all.
> >
> > Any ideas are welcome
>
> you need to setup per-table tablespace, did you?
>
> Section 13.2.3.1, “Using Per-Table Tablespaces”.
>
> http://dev.mysql.com/doc/refman/5.0/en/innodb-init.html




Symlink InnoDB tables without stopping MySQL

2008-04-23 Thread Dobromir Velev
Hi,
What I'm trying to do is to create a new InnoDB table on a different disk and 
symlink it to an existing database.
I have innodb_file_per_table turned on and here is how I tried to do it


mysql> \u test
mysql> create table test (...) ENGINE  = 'InnoDB';
mysql>\q

move the test.ibd file to the other disk
create a symlink in the database directory
flush tables;


This works as expected, but there is something that bothers me - I inserted
about 60K rows into the new table and all the queries I tried are working,
including selects, inserts and updates. The "SHOW TABLE STATUS" command
displays relevant results, and yet the test.ibd file to which the symlink
points hasn't been changed or accessed at all.

Any ideas are welcome
Dobromir Velev






Re: Corruption? Performance issue + strange 'explain'

2008-01-22 Thread Dobromir Velev
Hi,
Have you tried changing the date format like 

delete from TelecomAccountPosting where LocID=19014 and InvDate='2008-01-15';

I know it shouldn't matter but I have had similar issues due to the date 
format.

Also, are you sure there are 3773 records matching these criteria? According to
your table structure there is a UNIQUE index on LocID and InvDate, so there
should not be more than one record.
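
A quick way to double-check that count (a sketch using the names from the
quoted message):

SELECT COUNT(*) FROM TelecomAccountPosting
WHERE LocID = 19014 AND InvDate = '2008-01-15';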

Dobromir Velev
WebSitePulse.com

On Tuesday 22 January 2008 02:39, Daniel Kasak wrote:
> On Tue, 2008-01-22 at 11:23 +1100, Chris wrote:
> > > Why is it saying 'Impossible WHERE noticed after reading const tables'?
> >
> > http://dev.mysql.com/doc/refman/4.1/en/explain.html
> > MySQL has read all const (and system) tables and notice that the WHERE
> > clause is always false.
> >
> > ie - no rows match that query and so there's nothing to 'explain'.
>
> There must be a problem then. In this particular example, there were
> 3773 records returned by this select.
>
> > I'd suspect that the time is spent trying to check or clean up the
> > foreign key reference. Are there lots of locations with that id in the
> > tlocations table?
>
> 1
>
> > I'd also assume that since it's named 'id' it would be
> > a primary key (and indexed) ?
>
> Indexed, yes.
>
> I'll go ahead with that restore from a backup tonight.
>
> --
> Daniel Kasak
> IT Developer
> NUS Consulting Group
> Level 5, 77 Pacific Highway
> North Sydney, NSW, Australia 2060
> T: (+61) 2 9922-7676 / F: (+61) 2 9922 7989
> email: [EMAIL PROTECTED]
> website: http://www.nusconsulting.com.au




Re: InnoDB Memory Problem causing mysql to crash

2006-05-08 Thread Dobromir Velev
Hi,
I'm aware of the fact that this is a 32-bit system, and I've tried to make
sure that mysqld will not use more than 4 GB. As you can see, the
innodb_buffer_pool_size is 2 GB and the total amount of memory used by the
MyISAM key buffer and the per-thread variables is less than 2 GB. There
are no other services on this machine, so memory should not be a problem.

This server was working fine for almost a year until it recently started
crashing. Could it be some memory problem I've run into, and can you suggest
anything I can do to avoid similar problems in the future?

Thanks
Dobromir Velev


On Saturday 06 May 2006 01:23, Heikki Tuuri wrote:
> Dobromir,
>
> you are running a 32-bit operating system. Then the size of the mysqld
> process is limited to 2 GB, or at most to 4 GB. The amount of total RAM 8
> GB does not help here, since 2^32 = 4 G.
>
> You should reduce the key_buffer_size or innodb_buffer_pool_size in my.cnf.
>
> Best regards,
>
> Heikki
>
> Oracle Corp./Innobase Oy
> InnoDB - transactions, row level locking, and foreign keys for MySQL
>
> InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM
> tables
> http://www.innodb.com/order.php
>
> - Original Message -
> From: ""sheeri kritzer"" <[EMAIL PROTECTED]>
> Newsgroups: mailing.database.myodbc
> Sent: Friday, May 05, 2006 10:50 PM
> Subject: Re: InnoDB Memory Problem causing mysql to crash
>
> > Well, according to my calculations:
> > innodb_buffer_pool_size + key_buffer_size
> > + max_connections*(sort_buffer_size+read_buffer_size+binlog_cache_size)
> > + max_connections*2MB
> >
> > (I used the default binlog_cache_size value of 32K plus your settings)
> >
> > MySQL could use up to 4.991913 G of memory.  Shouldn't be a problem,
> > unless of course your 8G of machine is running something other than
> > MySQL.  Is it?  Because the fact that it could not allocate memory
> > means that something was trying to use memory that didn't exist
> >
> > Did MySQL dump a core file?
> >
> > Did you follow this advice?
> >
> >> You seem to be running 32-bit Linux and have 473 concurrent connections.
> >> If you have not changed STACK_SIZE in LinuxThreads and built the binary
> >> yourself, LinuxThreads is quite likely to steal a part of the global heap for
> >> the thread stack. Please read http://www.mysql.com/doc/L/i/Linux.html
> >
> > Did you read the man page?
> >
> >> The manual page at http://www.mysql.com/doc/en/Crashing.html contains
> >> information that should help you find out what is causing the crash.
> >
> > Also, did you try to look at your slow query logs to see if there was
> > some kind of query hogging memory?  What about backups running at the
> > same time?
> >
> > I'll note that you maxxed out your connections, which shouldn't cause
> > a crash, but might indicate that your server tuning is not up-to-date
> > with your actual usage.
> >
> > Are your data and logfiles are on a diffferent partitions?  We had
> > problems with one machine where the data and logfiles were on the same
> > partition, and it would crash -- we moved to a machine that was the
> > same except for the different OS partitions, and it didn't crash!  We
> > figure the disk seeking just killed the OS so it segfaulted the mysql
> > process.
> >
> > -Sheeri
> >
> > On 5/4/06, Dobromir Velev <[EMAIL PROTECTED]> wrote:
> >> Hi,
> >> I'm trying to resolve why InnoDB is crashing. It happened twice for the last
> >> month without obvious reason.
> >>
> >> Any help will be appreciated.
> >>
> >> Dobromir Velev
> >>
> >> My Server is
> >> Red Hat Enterprise Linux ES release 3 (Taroon Update 7) 2.4.21-32.0.1.ELsmp
> >> Dual 3.2 GHz Intel Xeon
> >> 8 GB RAM
> >> with 3 x 146GB SCA Ultra 320 10K RPM SCSI Drives
> >>
> >>
> >> my.cnf settings
> >>
> >> innodb_buffer_pool_size=2000M
> >> innodb_additional_mem_pool_size=20M
> >> innodb_log_file_size=150M
> >> innodb_log_buffer_size=8M
> >> innodb_flush_log_at_trx_commit=0
> >> innodb_lock_wait_timeout=50
> >> key_buffer_size=1000M
> >> read_buffer_size=500K
> >> read_rnd_buffer_size=1200K
> >> sort_buffer_size=1M
> >> thread_cache=256
> >> thread

InnoDB Memory Problem causing mysql to crash

2006-05-04 Thread Dobromir Velev
Hi,
I'm trying to resolve why InnoDB is crashing. It has happened twice in the last
month without obvious reason.

Any help will be appreciated.

Dobromir Velev

My Server is 
Red Hat Enterprise Linux ES release 3 (Taroon Update 7) 2.4.21-32.0.1.ELsmp
Dual 3.2 GHz Intel Xeon
8 GB RAM
with 3 x 146GB SCA Ultra 320 10K RPM SCSI Drives


my.cnf settings

innodb_buffer_pool_size=2000M
innodb_additional_mem_pool_size=20M
innodb_log_file_size=150M
innodb_log_buffer_size=8M
innodb_flush_log_at_trx_commit=0
innodb_lock_wait_timeout=50
key_buffer_size=1000M
read_buffer_size=500K
read_rnd_buffer_size=1200K
sort_buffer_size=1M
thread_cache=256
thread_concurrency=8
thread_stack=126976
myisam_sort_buffer_size=64M
max_connections=600


The error log shows the following message:

InnoDB: Fatal error: cannot allocate 1048576 bytes of
InnoDB: memory with malloc! Total allocated memory
InnoDB: by InnoDB 2263507272 bytes. Operating system errno: 12
InnoDB: Cannot continue operation!
InnoDB: Check if you should increase the swap file or
InnoDB: ulimits of your operating system.
InnoDB: On FreeBSD check you have compiled the OS with
InnoDB: a big enough maximum process size.
InnoDB: We now intentionally generate a seg fault so that
InnoDB: on Linux we get a stack trace.
mysqld got signal 11;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.

key_buffer_size=1048576000
read_buffer_size=507904
max_used_connections=600
max_connections=600
threads_connected=473
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections = 
1935995 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

You seem to be running 32-bit Linux and have 473 concurrent connections.
If you have not changed STACK_SIZE in LinuxThreads and built the binary
yourself, LinuxThreads is quite likely to steal a part of the global heap for
the thread stack. Please read http://www.mysql.com/doc/L/i/Linux.html

thd=(nil)
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
Cannot determine thread, fp=0xbff1f558, backtrace may not be correct.
Stack range sanity check OK, backtrace follows:
0x8072d74
0x826d678
0x8213c74
0x8213d04
0x8218b84
0x81d5ba6
0x80fd659
0x826ae2c
0x82a0cda
New value of fp=(nil) failed sanity check, terminating stack trace!
Please read http://dev.mysql.com/doc/mysql/en/Using_stack_trace.html and 
follow instructions on how to resolve the stack trace. Resolved
stack trace is much more helpful in diagnosing the problem, so please do
resolve it
The manual page at http://www.mysql.com/doc/en/Crashing.html contains
information that should help you find out what is causing the crash.

Number of processes running now: 0
060503 16:37:21  mysqld restarted
060503 16:37:21 Can't start server: Bind on TCP/IP port: Address already in 
use
060503 16:37:21 Do you already have another mysqld server running on port: 
3306 ?
060503 16:37:21 Aborting

and the resolved stack trace is 

0x8072d74 handle_segfault + 420
0x826d678 pthread_sighandler + 184
0x8213c74 ut_malloc_low + 132
0x8213d04 ut_malloc + 20
0x8218b84 os_aio_simulated_handle + 916
0x81d5ba6 fil_aio_wait + 214
0x80fd659 io_handler_thread + 25
0x826ae2c pthread_start_thread + 220
0x82a0cda thread_start + 4







Re: Scripting Issues

2005-11-11 Thread Dobromir Velev
Hi,
Why don't you run the CREATE TABLE query with the IF NOT EXISTS clause?
Then your script could look like this:

CREATE TABLE IF NOT EXISTS table_name ...;
ALTER TABLE table_name ...;

If the table already exists, only the ALTER statement will be executed;
otherwise the ALTER statement will not do anything.

Check the MySQL Reference for more details
http://dev.mysql.com/doc/refman/4.1/en/create-table.html
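
A minimal sketch of the pattern, with hypothetical table and column names:

CREATE TABLE IF NOT EXISTS clients (
    id INT NOT NULL PRIMARY KEY,
    name VARCHAR(100)
);
-- upgrade step for installations that already have an older version of the table
ALTER TABLE clients ADD COLUMN email VARCHAR(255);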

HTH

-- 
Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/

On Friday 11 November 2005 14:14, Luke wrote:
> Hey all,
>
> I've got a question for you. What would be the best way to set up a set of
> scripts to create/update the database when my app is installed?
>
> The problem is that I obviously can't just drop the database, because if the
> database is already installed then the client would lose all his data. I
> need a way to update the columns in each table if they change but still
> keep the client's data. Stored Procedures, Views and Triggers are easy because
> these I can drop and then create again, but tables are a bit more complex.
>
> I thought of  writing something to the effect of :
>
> USE Information_Schema;
>
> IF (SELECT Table_Name FROM Tables WHERE TableName = '' AND
> Table_Schema = '') IS NULL THEN BEGIN
> {Create Table}
> END;
> ELSE
> BEGIN
> {Alter Table}
> END;
> END IF;
>
> Is there not a better way of doing things? We want to try and create\update
> the database through scripts which can then be deleted instead of writing
> an app.
>
> Any help would be greatly appreciated
>
> Thanx,
> Luke




Re: Can conditions be mixed with DISTINCT()

2005-10-25 Thread Dobromir Velev
Hi,
I think you could use something like this:

SELECT DISTINCT 
(CASE p.ship_status 
WHEN '1' THEN "shipping_now" 
WHEN '2' THEN "shipping_soon" 
ELSE 'unknown'
END) as status
FROM products AS p
INNER JOIN  cart AS i
ON i.product_id = p.id
WHERE i.session_id = " & prepSQL(tConn, tSessionID);

If there are any products for the selected session_id, this should return one
row for every status, like this:

+---------------+
| status        |
+---------------+
| unknown       |
| shipping_now  |
| shipping_soon |
+---------------+


HTH
-- 
Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/

On Tuesday 25 October 2005 01:00, Scott Haneda wrote:
> tSql = "SELECT DISTINCT(p.ship_status)
> FROM products AS p
> INNER JOIN  cart AS i
> ON i.product_id = p.id
> WHERE i.session_id = " & prepSQL(tConn, tSessionID);
>
> p.ship_status is either a "1" or a "0", which is just how the database was
> set up ages, ago.  I am moving these to enum() types as I go, but to change
> this one, would break too much stuff.
>
> I would like to toss in a condition to the select so it returns
> "shipping_now" for "1" and "shipping_soon" for "2".  When I do this, I get
> zero results returned.
> --
> -
> Scott HanedaTel: 415.898.2602
> <http://www.newgeo.com> Novato, CA U.S.A.





Re: sum of time?

2005-10-24 Thread Dobromir Velev
I think there should be no problem using the SUM() function - have you tried
it like this?

select SEC_to_time(SUM(unix_timestamp(TTendTime) - 
unix_timestamp(TTstartTime))) as endtime
FROM TimeTracking
WHERE TTperson = 1  and date(TTstartTime) = '2005-10-19'
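
If you need the totals for every person and day in one go, the same idea
extends with GROUP BY (a sketch):

SELECT TTperson, DATE(TTstartTime) AS day,
       SEC_TO_TIME(SUM(UNIX_TIMESTAMP(TTendTime) - UNIX_TIMESTAMP(TTstartTime))) AS total_time
FROM TimeTracking
GROUP BY TTperson, DATE(TTstartTime);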

HTH

-- 
Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/

On Monday 24 October 2005 07:48, [EMAIL PROTECTED] wrote:
> I have a table doing time tracking.  I don't use timestamps, I use datetime
> fields to record punch in times and punch out times.  I have this query
> that computes the amount of time between a punch in and punch out:
>
> select SEC_to_time(unix_timestamp(TTendTime) - unix_timestamp(TTstartTime))
> as endtime
> FROM TimeTracking
> WHERE TTperson = 1
> and date(TTstartTime) = '2005-10-19'
>
> And this works great except for when people punch in and out several times
> in one day.  Is there any way I can total a number of records into one
> total time?  In this example case, I am TTperson #1 and I punched in and
> out five times this day.
>
> I know I can do it in the code, but if I can do it in sql, life would be
> better for me.
>
> --ja
>
> --




Re: delete with an offset?

2005-10-21 Thread Dobromir Velev
I don't think it is possible, at least not with one query. You can limit the
number of records to delete, but at least until 4.1 there is no offset.
You can probably do it with a subselect like this:

delete from messages where to_uid=1 and read_timestamp>0 and timestamp<(select
timestamp from messages where to_uid=1 and read_timestamp>0 order by
timestamp desc limit 100,1)

HTH
-- 
Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/

On Friday 21 October 2005 11:44, Eric Persson wrote:
> Hi,
>
> Is it possible to delete from a table with some sort of offset? I did
> the following select query, which selects a message from a message table.
>
> select id, subject from messages where to_uid=1 and read_timestamp>0
> order by timestamp desc limit 3,1;
>
> The reason is I only want it to be 3 messages per to_uid, so id thought
> I would be able to delete from the third message and forward with
> something like.
>
> delete from messages where to_uid=1 and read_timestamp>0 order by
> timestamp desc limit 3,1000;
>
> What I basically want is it to delete all messages above 100 for each
> user, any clues on how to solve this?
>
> Best regards,
>  Eric





Re: Fw: query help

2005-10-18 Thread Dobromir Velev
Hi,
The following query will probably work but I think it will be easier to pass 
the minimum date from your application.

SELECT * FROM t WHERE (year(dt)=year(Now()) and dtmonth(Now()))

Also you might want to check the other Date and Time functions

http://dev.mysql.com/doc/refman/4.1/en/date-and-time-functions.html

I'm sure there is a better solution than the above, and the query will be much
more optimized if you can send the minimum date from your application, like
this:

SELECT * FROM t WHERE dt>$date;
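
If you would rather have MySQL work out the cutoff itself, one way is to anchor
it to the first day of the month eleven months back (a sketch, matching the
'start from June 01' example below):

SELECT * FROM t
WHERE dt >= DATE_FORMAT(NOW() - INTERVAL 11 MONTH, '%Y-%m-01');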

Shawn's idea is also good - I just saw his response using the LAST_DAY() 
function. 

-- 

Dobromir Velev


On Tuesday 18 October 2005 19:01, grKumaran wrote:
> - Original Message -
> From: "grKumaran" <[EMAIL PROTECTED]>
> To: 
> Sent: Tuesday, October 18, 2005 16:08
> Subject: query help
>
> : Hello,
> :
> : CREATE TABLE t (
> : dt datetime
> : )
> :
> : Please assume this is the table structure and contains thousands of
>
> records.
>
> : And I want to list them only last 12 months (that mean last 1 year)
>
> records
>
> : exactly.
> :
> : For that I tried using the following query, but it list sometimes 13
>
> months
>
> : when the current date is in the middle of the month.
> :
> : SELECT * FROM t WHERE DATE_ADD(dt, INTERVAL 1 YEAR) >= NOW();
> :
> : I request you to help me.  And thanking you for the consideration.
>
> Thanking you all people. I think I am not clear in last mail, here I go in
> more detail.
>
> Sample records:
> 2004-05-25
> 2004-06-25
> 2004-07-25
> 2004-08-25
> 2004-09-25
> 2004-10-25
> 2004-11-25
> 2004-12-25
> 2005-01-25
> 2005-02-25
> 2005-03-25
> 2005-04-25
> 2005-05-25
>
> Let us assume we are in any date of May month, then I want the records
> starts from June 01 to the current datetime.
>
> Required query should bring the following result
> -- if we are on 25th day or later of May month
> 2004-06-25
> 2004-07-25
> 2004-08-25
> 2004-09-25
> 2004-10-25
> 2004-11-25
> 2004-12-25
> 2005-01-25
> 2005-02-25
> 2005-03-25
> 2005-04-25
> 2005-05-25
>
> -- if we are before 25th day of May.
> 2004-06-25
> 2004-07-25
> 2004-08-25
> 2004-09-25
> 2004-10-25
> 2004-11-25
> 2004-12-25
> 2005-01-25
> 2005-02-25
> 2005-03-25
> 2005-04-25
>
> Once again thanking you all the people. And forgive me for any mistakes in
> my English.
>
> Sincerely,
> R. Kumaran





Re: connection issue

2005-10-13 Thread Dobromir Velev
When you connect without specifying the host, the mysql client will try to
connect using the mysql socket file (usually /tmp/mysql.sock
or /var/lib/mysql/mysql.sock), and when you specify the IP address it will try
to connect using a TCP connection to port 3306.

My guess is that you have name lookups turned on. If this is the case mysql
will try to resolve the IP address given, and probably the 192.168.3.111 IP
is not resolving properly. The solution for this is to add a

skip-name-resolve

option to your MySQL configuration file.

You might also check http://dev.mysql.com/doc/refman/5.0/en/dns.html and the 
related articles in the MySQL documentation

HTH 
Dobromir Velev

On Thursday 13 October 2005 15:38, Anil wrote:
> Hi List,
>
>
>
> When I am trying to connect to mysql 4.0.20 database it is taking very long
> time  when I specified host like
>
>
>
> Mysql -ux -p -h192.168.3.111
>
>
>
> But it is connecting very quickly when I tried like below
>
>
>
> Mysql -ux -p
>
>
>
>
>
> It is an urgent issue. Please help me.
>
>
>
> Thanks
>
> Anil
>
> DBA

-- 

Dobromir Velev




Re: Locking Methods

2005-09-08 Thread Dobromir Velev
Hi,
MySQL locks work only until a session expires, so they will not be of much use
in your case. The best solution will be to implement this logic in your
application - the simplest method is to add a field to the table which will
keep track of whether the specific record is locked or not. Then
you will have to modify your UPDATE/DELETE queries so that they do not affect
locked records.
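
A minimal sketch of that approach, with hypothetical table and column names:

ALTER TABLE records ADD COLUMN is_locked TINYINT(1) NOT NULL DEFAULT 0;
-- mark a record as locked once it must no longer change
UPDATE records SET is_locked = 1 WHERE id = 42;
-- later updates and deletes skip locked records
UPDATE records SET payload = 'new value' WHERE id = 42 AND is_locked = 0;
DELETE FROM records WHERE id = 42 AND is_locked = 0;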

HTH
-- 
Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/

On Wednesday 07 September 2005 23:36, Rich wrote:
> Hi there.
>
> I started a different thread on this, but then I realized I might not have
> to use InnoDB for this.
>
> Let me explain what I wish to achieve.
>
> I want to create records in an established table.  I then want them locked
> (either by locking the whole table or by individual record) so that they
> cannot be updated or deleted.
>
> That's it.
>
> I am concerned that locking a table won't allow me to add new records.  I
> also need the records fully viewable.
>
> Which table format should I choose, and how do I implement this?  I've
> reviewed some of the alternatives, and they got all confusing to me.
> 15.11.3 InnoDB and Transaction Isolation Level indicates that READ
> COMMITTED is what I should be looking for, but it refers to an index that
> I'm unaware of, as nothing is indexed.
>
> Any leadership appreciated.
>
> Cheers





Re: Remote access denial

2005-09-02 Thread Dobromir Velev
Hi,
Does 10.4.0.101 reverse-resolve to a specific hostname? If it does, you are
probably running MySQL with the skip_name_resolve option, and if it
doesn't, there is no way the MySQL server could tell from which host the
request is coming.

My opinion is that it is always better to use IP addresses instead of
hostnames and use skip_name_resolve - the connection overhead will be
smaller this way.
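
For example, a grant keyed on the IP address rather than a hostname looks
roughly like this (database name and password are hypothetical):

GRANT SELECT, INSERT, UPDATE, DELETE ON mydb.* TO 'tsh'@'10.4.0.101' IDENTIFIED BY 'secret';
FLUSH PRIVILEGES;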

You can also check the comments on
http://dev.mysql.com/doc/mysql/en/dns.html

-- 
Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/

On Friday 02 September 2005 13:38, T. Horsnell wrote:
> After replacing a 3.23.28 server with 4.1.14 server binaries on an Alpha
> running Tru64 5.1B, I get things like:
>
> ERROR 1045 (28000): Access denied for user 'tsh'@'10.4.0.101' (using
> password: YES)
>
> when trying to connect from a remote host.
>
> The problem goes away if I replace hostnames by IP addresses
> in the mysql 'user' tables. I see from the archives that this
> problem has been reported recently by others. Is there a fix?
> (apart from changing every relevant entry in the user tables)
> I cant see anything relevant in the Buglists.
>
>
> Cheers,
> Terry.





Re: a difficult join query question

2005-09-02 Thread Dobromir Velev
Hi,
I would rather put the attributes.attribute='pcname' in the join clause.
It should look something like this:

SELECT inventory.invid,inventory.model,attributes.value as pcname
 FROM inventory
 LEFT JOIN attributes ON (inventory.invid=attributes.invid and 
attributes.attribute='pcname' );
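
With the sample data from your message, that should return all four computers,
with NULL where there is no pc-name:

invid  model         pcname
----------------------------
1001   HP Notebook   NULL
1002   Dell Desktop  atpc01
1003   Acer Laptop   atpc02
1004   Apple iBook   NULL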

HTH

-- 
Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/

On Friday 02 September 2005 14:37, Klemens Ullmann wrote:
> hello!
>
> I've got two tables for an IT hardware inventory:
>
> ### table inventory:
> invid  model
> ---
> 1001   HP Notebook// no attributes
> 1002   Dell Desktop   // only one attribut 'pc-name'
> 1003   Acer Laptop// attributes 'pc-name' & 'harddisk'
> 1004   Apple iBook// only one attribut 'harddisk'
>
> ### table attributes:
> id   invid  attribute  value
> -
> 501  1002   pcname atpc01
> 502  1003   pcname atpc02
> 503  1003   harddisk   20GB
> 504  1004   harddisk   40GB
>
> what I want is a list of all computers (without exeptions) in the
> following form:
> invid - modell - pcname
>
> my best guess zu date of a possible query is:
> SELECT inventory.invid,inventory.model,attributes.value as pcname
> FROM inventory
> LEFT JOIN attributes ON (inventory.invid=attributes.invid)
> WHERE attributes.attribute='pcname' or attributes.attribute is NULL;
>
> ### result:
> invid  model pcname
> ---
> 1001   HP Notebook   NULL
> 1002   Dell Desktop  atpc01
> 1003   Acer Laptop   atpc02
>
>
> now my problem are the missing computers which have an attribute, but
> not a 'pc-name'-attribute.
> (in the example above the missing iBook)
>
> thank you for suggestions how to solve the problem!
>
> have a nice day,
>
>  klemens ullmann / vienna




Re: Choosing a value in a query

2005-08-01 Thread Dobromir Velev
Hi,
You should be able to do a join on both tables and use the IF function in the
select. It would look something like:

select if(Table1.MemberName='', Table2.Member_GroupName, Table1.MemberName) as 
name from Table1 left join Table2 on ...

I cannot guess how your tables are related to each other, so you will have to
write the join clause yourself, or just send the structure of both tables and
what data you want in the result set and I will write the SQL for you.

And one other thing - what do you mean by "If Member_Name is not a string"?
Should the value of Member_Name be empty or NULL, or would a numeric value
satisfy this condition as well?
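
Based on the field names quoted below, and assuming Affected_Member joins to
Member_Code and that a missing surname is NULL or an empty string, a sketch
could be:

SELECT IF(t2.Member_Surname IS NULL OR t2.Member_Surname = '',
          t2.Member_Groupname, t2.Member_Surname) AS Member_Name
FROM Table1 AS t1
LEFT JOIN Table2 AS t2 ON t1.Affected_Member = t2.Member_Code
WHERE t1.Affected_Member IS NOT NULL;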

HTH

-- 
Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/

On Sunday 31 July 2005 16:37, Martin Lancaster wrote:
> Hi all,
>
> I am using MySQL 4.1.11nt
> I have two MyISAM tables:
>
> Table 1 holds usage information about users accessing various parts of the
> database.
> Relevant fields are "Users_Code", "Users_action" and "Affected_Member".
>
> Table 2 holds personal information about the Member.
> Relevant fields are "Member_Code", "Member_Surname" and "Member_Groupname"
>
> Table1.Affected_Member is the link to Table2.Member_Code.
> Table1.Affected_Member can be a null field.
>
> My application is coded so that if Table2.Member_Surname is null, there
> will be a string value in Table2.Member_Groupname. The reverse is also
> true. The application is also coded so that Table2.Member_Surname and
> Table2.Member_Groupname cannot both be null, nor both have a value.
>
> I am trying to code the following:
> 1. If Table1.Affected_Member is not null, then get Table2.Member_Surname
> where Table1.Affected_Member = Table2.Member_Code, assigning this to the
> output of the query as Member_Name.
>
> 2. If Member_Name is not a string then let Member_Name =
> Table2.Member_GroupName
>
> This will give the result that if Table1.Affected_Member is not null, then
> the returned value of Member_Name will be either the Surname of the Member
> or the Groupname of the Member.
>
> Although I am having success with coding part 1. of the query, I cannot get
> Part 2 of the query to give the required result.
>
> I can take the resultset from just running a query to get the information
> from table 1, and then go through this result set, running further queries
> to get the information required from Table 2, but, as I see it, this will
> have to be a new query for each record in the Table1.Resultset, which will
> take up a lot of bandwidth, and make the application slow.
>
> Is it possible to code this into one single query that returns all of the
> values required?
>
> Many thanks for your help!
>
> Martin
> --
> --
> [EMAIL PROTECTED]
> --
>
> This email, and any attachments, has been scanned for virus contamination
> using Norton Anti-Virus 2002




Re: Persistent Corruption

2005-07-28 Thread Dobromir Velev
Hi,
Isn't there some information in the MySQL error log about what might have
caused the corruption? Normally this would happen when a disk or memory
problem occurs.


On Thursday 28 July 2005 17:45, Chris McKeever wrote:
> MySQL 4.0.16
>
> I am having this annoying persistent corruption issue, and am
> wondering if anyone has any suggestions.  I have two tables that
> refuse to stay clean.  After a myisamchk (below) they show corruption.
>  I run a myisamchk -r, they get fixed, and the next day, they are once
> again corrupt.
>
> Finally, I did a mysqldump, dropped the tables, imported the data from
> the dump, and the next day - corrupt.
>
> I am at a loss, I thought the brute force method should clean it.
>
> I dont think it is the applicaiton itself, since there are an
> identical 24 tables (alphabet) that do not have this issue.
>
> Any help would be appreciated - Chris
>
> Checking MyISAM file: EmailMessage_c.MYI
> Data records:   79196   Deleted blocks:  22
> myisamchk: warning: 1 clients is using or hasn't closed the table properly
> - check file-size
> - check key delete-chain
> - check record delete-chain
> - check index reference
> - check data record references index: 1
> - check record links
> myisamchk: error: Record-count is not ok; is 79197Should be: 79196
> myisamchk: warning: Found 457872 deleted space.   Should be 459588
> myisamchk: warning: Found  79507 partsShould be: 79506
> parts MyISAM-table 'EmailMessage_c.MYI' is corrupted
> Fix it using switch "-r" or "-o"
>
>
>
> Checking MyISAM file: EmailMessage_j.MYI
> Data records:   39907   Deleted blocks:  91
> myisamchk: warning: 1 clients is using or hasn't closed the table properly
> - check file-size
> - check key delete-chain
> - check record delete-chain
> - check index reference
> - check data record references index: 1
> - check record links
> myisamchk: error: Record-count is not ok; is 39909Should be: 39907
> myisamchk: warning: Found 719032 deleted space.   Should be 742328
> myisamchk: warning: Found 89 deleted blocks   Should be: 91
> myisamchk: warning: Found  40195 partsShould be: 40193
> parts MyISAM-table 'EmailMessage_j.MYI' is corrupted
> Fix it using switch "-r" or "-o"
>
>
> --
> --
> please respond to the list .. if you need to contact me direct
> cgmckeever is the account
> prupref.com is the domain
>
> http://www.prupref.com";>Simply Chicago Real Estate

-- 
Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/




Re: Problem Escaping a Boolean Query

2005-07-27 Thread Dobromir Velev
I ran into this problem a few months ago and the only workaround  I could 
think of was to escape the quotes in the table with """. Then your query 
should be something like this

select * from feeds where MATCH(feed_title, feed_content) AGAINST('"Sean
"P. Diddy" +Combs"' IN BOOLEAN MODE) order by feed_date DESC


A different solution is to put a "+" sign before every word in the search
phrase, like

select * from feeds where MATCH(feed_title, feed_content) AGAINST('+Sean
+"P. Diddy" +Combs' IN BOOLEAN MODE) order by feed_date DESC

but it is not as accurate and returns more irrelevant results

HTH
-- 
Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/

On Wednesday 27 July 2005 03:47, Blogfreaks.com wrote:
> I'm using a boolean query to perform an exact match on musicians within a
> text field. However, if the musician's name contains a quote, I get
> inaccurate results. For example, this query works fine:
>
> select * from feeds where MATCH(feed_title, feed_content) AGAINST('"Tom
> Petty"' IN BOOLEAN MODE) order by feed_date DESC
>
> The above query returns all feeds referencing the phrase "Tom Petty". I run
> into a problem when the musician has quotes in their name:
>
> select * from feeds where MATCH(feed_title, feed_content) AGAINST('"Sean
> \"P. Diddy\" Combs"' IN BOOLEAN MODE) order by feed_date DESC
>
> I'm trying to escape "Sean "P. Diddy" Combs" with backslashes, but it
> doesn't work. This query returns every single row in my table, and it takes
> about 11 seconds to run. The other queries take a fraction of a second.
>
> Any help would be greatly appreciated. Thanks!
>
> Shaun
> http://www.blogfreaks.com





Re: all user command

2005-07-27 Thread Dobromir Velev
I think what you are looking for is 

show processlist;

http://dev.mysql.com/doc/mysql/en/show-processlist.html

You could also use Jeremy Zawodny's mytop
(http://jeremy.zawodny.com/mysql/mytop/) or any of the MySQL administration
software available on the Internet.



On Wednesday 27 July 2005 04:21, Joeffrey Betita wrote:

> hello
>   what command should i type to see all the user connected to the database.
> thank you very much.
>
>
>
>
> rgds,
> Joeffrey

-- 
Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/




Re: InnoDB Netapp Snapshot

2005-07-27 Thread Dobromir Velev
Hi,
This procedure works for me and until now I haven't had any problems with it.
You should be aware that if your database is under heavy load you might
experience problems with transactions timing out due to the lock.
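
For reference, the sequence being discussed, expressed as the statements one
session would run (the snapshot itself happens outside MySQL, while the lock is
held):

FLUSH TABLES WITH READ LOCK;
FLUSH LOGS;
-- take the NetApp snapshot now, keeping this connection (and its lock) open
UNLOCK TABLES;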

HTH

On Wednesday 27 July 2005 09:14, Jeff Richards wrote:
> Hi,
>
> I need a procedure that will allow me to take an online (i.e. database
> up) Netapp Snapshot of a large InnoDB-based database. Could someone
> please confirm that this is a valid way of making a Snapshot:
>
> * Issue a "flush tables with read lock;"
> * Issue a "flush logs;"
> * Create the Snapshot
> * Issue an "unlock tables;"
>
> The reason I ask is that this list is the only place I can find
> reference to this procedure. Everywhere else talks about either
> mysqldump or the InnoDB Hot Backup utility for making online backups.
>
> Thanks in advance!
>
> Jeff
>
> --
> Jeff Richards
> Consulting Architect
> Openwave Systems Asia Pacific
> +61 415 638757

-- 
Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/




AUTO_INC lock

2005-06-02 Thread Dobromir Velev
Hi,
Can somebody tell me what the AUTO_INC lock is doing? When my db is
experiencing high load I see a lot of these in the InnoDB status monitor.

I'm doing a lot of inserts, and from time to time I need to generate some
reports that are based on a large amount of data, and then the MySQL server
starts locking the inserts. The strange thing is that it is locking even inserts
into tables that are not used in the report.


Here is some data from the INNODB monitor

--
---TRANSACTION 2 559663507, ACTIVE 7 sec, process no 27751, OS thread id 
120930675 setting auto-inc lock
mysql tables in use 1, locked 1
LOCK WAIT 1 lock struct(s), heap size 320
MySQL thread id 25414071, query id 168565699 xx.xx.xx.xx user update
INSERT INTO slogs VALUES 
(NULL,29837,'OK','0.00','100.97','102.12','103.16','N',Now(),0.131011*1000)
--- TRX HAS BEEN WAITING 7 SEC FOR THIS LOCK TO BE GRANTED:
TABLE LOCK table `db_name/slogs` trx id 2 559663507 lock mode AUTO-INC waiting

-- 
Dobromir Velev




Re: Lost connection to MySQL server during query - on long queries

2005-06-01 Thread Dobromir Velev
_flush_method', ''
> 'innodb_force_recovery', '0'
> 'innodb_lock_wait_timeout', '50'
> 'innodb_locks_unsafe_for_binlog', 'OFF'
> 'innodb_log_arch_dir', ''
> 'innodb_log_archive', 'OFF'
> 'innodb_log_buffer_size', '1048576'
> 'innodb_log_file_size', '5242880'
> 'innodb_log_files_in_group', '2'
> 'innodb_log_group_home_dir', './'
> 'innodb_max_dirty_pages_pct', '90'
> 'innodb_max_purge_lag', '0'
> 'innodb_mirrored_log_groups', '1'
> 'innodb_open_files', '300'
> 'innodb_table_locks', 'ON'
> 'innodb_thread_concurrency', '8'
> 'interactive_timeout', '100'
> 'join_buffer_size', '131072'
> 'key_buffer_size', '8388600'
> 'key_cache_age_threshold', '300'
> 'key_cache_block_size', '1024'
> 'key_cache_division_limit', '100'
> 'language', '/usr/share/mysql/english/'
> 'large_files_support', 'ON'
> 'license', 'GPL'
> 'local_infile', 'ON'
> 'locked_in_memory', 'OFF'
>
> 'long_query_time', '10'
> 'low_priority_updates', 'OFF'
> 'lower_case_file_system', 'OFF'
> 'lower_case_table_names', '1'
> 'max_allowed_packet', '1047552'
> 'max_binlog_cache_size', '4294967295'
> 'max_binlog_size', '1073741824'
> 'max_connect_errors', '10'
> 'max_connections', '100'
> 'max_delayed_threads', '20'
> 'max_error_count', '64'
> 'max_heap_table_size', '16777216'
> 'max_insert_delayed_threads', '20'
> 'max_join_size', '4294967295'
> 'max_length_for_sort_data', '1024'
> 'max_relay_log_size', '0'
> 'max_seeks_for_key', '4294967295'
> 'max_sort_length', '1024'
> 'max_tmp_tables', '32'
> 'max_user_connections', '0'
> 'max_write_lock_count', '4294967295'
>
> 'net_buffer_length', '16384'
> 'net_read_timeout', '30'
> 'net_retry_count', '10'
> 'net_write_timeout', '60'
> 'new', 'OFF'
> 'old_passwords', 'OFF'
> 'open_files_limit', '1024'
> 'pid_file', '/var/lib/mysql/mendelson.lotonet.local.pid'
> 'port', '3306'
> 'preload_buffer_size', '32768'
> 'protocol_version', '10'
> 'query_alloc_block_size', '8192'
> 'query_cache_limit', '67108864'
> 'query_cache_min_res_unit', '4096'
> 'query_cache_size', '67108864'
> 'query_cache_type', 'ON'
> 'query_cache_wlock_invalidate', 'OFF'
> 'query_prealloc_size', '8192'
> 'range_alloc_block_size', '2048'
> 'read_buffer_size', '131072'
> 'read_only', 'OFF'
> 'read_rnd_buffer_size', '262144'
> 'relay_log_purge', 'ON'
> 'relay_log_space_limit', '0'
> 'rpl_recovery_rank', '0'
> 'secure_auth', 'OFF'
> 'server_id', '0'
> 'skip_external_locking', 'ON'
> 'skip_networking', 'OFF'
> 'skip_show_database', 'OFF'
> 'slave_net_timeout', '3600'
> 'slave_transaction_retries', '0'
> 'slow_launch_time', '2'
> 'socket', '/var/lib/mysql/mysql.sock'
> 'sort_buffer_size', '2097144'
> 'sql_mode', ''
> 'storage_engine', 'MyISAM'
> 'sql_notes', 'OFF'
> 'sql_warnings', 'OFF'
>
> 'table_cache', '64'
> 'table_type', 'MyISAM'
> 'thread_cache_size', '0'
> 'thread_stack', '131072'
>
> 'time_zone', 'SYSTEM'
> 'tmp_table_size', '33554432'
> 'tmpdir', ''
> 'transaction_alloc_block_size', '8192'
> 'transaction_prealloc_size', '4096'
> 'tx_isolation', 'REPEATABLE-READ'
> 'version', '4.1.11-standard-log'
> 'version_comment', 'MySQL Community Edition - Standard (GPL)'
> 'version_compile_machine', 'i686'
> 'version_compile_os', 'pc-linux-gnu'
> 'wait_timeout', '28800'
>
>
> Java error log
>
> com.mysql.jdbc.CommunicationsException: Communications link failure due
> to underlying exception:
>
> ** BEGIN NESTED EXCEPTION **
>
> java.io.EOFException
>
> STACKTRACE:
>
> java.io.EOFException
> at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:1842)
> at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:2341)
> at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2784)
> at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:750)
> at com.mysql.jdbc.MysqlIO.nextRow(MysqlIO.java:1251)
> at com.mysql.jdbc.MysqlIO.readSingleRowSet(MysqlIO.java:2209)
> at com.mysql.jdbc.MysqlIO.getResultSet(MysqlIO.java:413)
> at com.mysql.jdbc.MysqlIO.readResultsForQueryOrUpdate(MysqlIO.java:1899)
> at com.mysql.jdbc.MysqlIO.readAllResults(MysqlIO.java:1347)

-- 
Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/




InnoDB locking issues

2005-02-02 Thread Dobromir Velev
Hi,
I'm running quite a large database - mostly inserts (4-5 million rows a day) 
and an occasional select to generate some report based on the inserted data 
which works very well.
Once a month I need to generate a larger number of reports. The SELECT queries
are optimized quite well, but some of the reports require grouping a large
number of records together, which can take up to 5 minutes; that is OK
for me. The problem is that when more than a certain number (between 4 and 8 -
I can't find any stable pattern) of SELECT queries are running
concurrently, InnoDB starts to lock the threads that insert data, and in a very
short time MySQL uses up all available connections. I couldn't find any reason
why InnoDB is locking INSERT threads which try to insert into tables different
from the ones that the reports are SELECT-ing from.

The InnoDB monitor shows a lot of transactions similar to this

---TRANSACTION 1 4271824648, ACTIVE 51 sec, process no 12904, OS thread id 
712286614 setting auto-inc lock
mysql tables in use 1, locked 1
LOCK WAIT 1 lock struct(s), heap size 320
MySQL thread id 485399224, query id 3713824274 69.59.185.156 websitepulse9 
update
INSERT INTO slogs8 VALUES 
(NULL,24801,'OK','0.00','62.40','62.56','62.82','N',Now(),0.479633*1000)
--- TRX HAS BEEN WAITING 51 SEC FOR THIS LOCK TO BE GRANTED:
TABLE LOCK table `websitepulse/slogs8` trx id 1 4271824648 lock mode AUTO-INC 
waiting

Normally there are fewer than 200 running threads (1-2 active), but to avoid 
hitting the connection limit due to this locking problem I had to increase 
the maximum number of connections to 600 and correspondingly decrease the 
size of the per-thread memory buffers, which slows down the selects 
even further.

I'll probably use a second server to replicate the database and run the 
reports from there, but I wanted to see if somebody might have a different 
suggestion.


Here is the servers info

MySQL: 4.0.21-standard Official MySQL RPM
OS : Red Hat Enterprise Linux ES 3  
Memory : 4 GB DELL RAM
Processor : Dual 3.06 GHz Intel Xeon
RAID Configuration : RAID 1 146 GB SCSI

Here is my /etc/my.cnf file
[mysqld]
port=3306
skip-name-resolve
log-bin=/var/lib/mysql/mysql
log-slow-queries=/var/lib/mysql/slow.queries.log
socket=/var/lib/mysql/mysql.sock
myisam-recover=BACKUP,FORCE
set-variable = max_connect_errors=10
innodb_data_home_dir =
innodb_data_file_path =/var/lib/mysql/innodbfile:100M:autoextend
innodb_log_group_home_dir = /var/log/innologs
innodb_log_arch_dir = /var/log/innologs
set-variable = innodb_buffer_pool_size=1700M
set-variable = innodb_additional_mem_pool_size=20M
set-variable = innodb_log_file_size=150M
set-variable = innodb_log_buffer_size=8M
innodb_flush_log_at_trx_commit=0
set-variable = innodb_lock_wait_timeout=50
set-variable = key_buffer_size=500M
set-variable = read_buffer_size=500K
set-variable = read_rnd_buffer_size=1200K
set-variable = sort_buffer_size=1M
set-variable = thread_cache=256
set-variable = thread_concurrency=8
set-variable = thread_stack=126976
set-variable = myisam_sort_buffer_size=64M
set-variable = max_connections=600
set-variable = table_cache=1
set-variable = wait_timeout=2000

Any suggestions are welcome.
-- 
Mark J.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Out of my depth.

2004-11-10 Thread Dobromir Velev
Hi,
Is there anything in your mysql error log? Did MySQL crash or stop 
unexpectedly while the script was running? I guess it is probably something 
with your memory usage configuration - please send your my.cnf file and tell 
us what machine you are running your MySQL server on.

-- 
Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/

On Tuesday 02 November 2004 12:45, John Smith wrote:
> Hi All,
>
> I have built a search application in php/libcurl and I store its results in
> MySQL.
>
> The problem is that I am not used to dealing with the sizes of tables my
> search application produces, roughly around 400,000 rows in a table the
> last time I got ran it correctly.
>
> Right to my problem.
>
> I fork 20 versions of my spider at a time, these forks all have a
> connection to the database and do selects, inserts and updates on the same
> table at once. When it's going at full steam I can be inserting 1000s of rows
> a minute.
>
> I am using MyISAM as I need its fulltext search capabilities. I remove the
> fulltext index before I start any changes to the table.
>
> I am finding that my php script is not being able to select from the
> database at random points; I have tracked this down to a 127 error - the
> table is corrupt.
>
> Before I start my spiders (before it forks) I run myisamchk -r on my .MYI
> file, but it corrupts during the script's execution time and this means it is
> no longer able to select from the DB (crucial to know if a record needs
> updating or inserting as a new record)
>
> Any hints, any more information needed from me etc would be great.
>
> My table struture is:
>
> CREATE TABLE thetable (
>   id int(11) NOT NULL auto_increment,
>   sid int(11) NOT NULL default '1',
>   pid varchar(14) NOT NULL default '0',
>   tid varchar(255) NOT NULL default '',
>   cid varchar(255) NOT NULL default '',
>   location text NOT NULL,
>   number int(14) NOT NULL default '0',
>   image text NOT NULL,
>   description text NOT NULL,
>   link text NOT NULL,
>   uo tinyint(1) NOT NULL default '0',
>   sd tinyint(1) NOT NULL default '0',
>   added int(14) NOT NULL default '0',
>   new tinyint(4) NOT NULL default '1',
>   old tinyint(4) NOT NULL default '0',
>   PRIMARY KEY  (id),
>   KEY sid (sid),
>   KEY old (old),
>   KEY new (new),
>   KEY sd (sd),
>   KEY uo (uo),
>   KEY pid (pid),
>   KEY tid (tid),
>   KEY cid (cid)
> )
>
> Ta,
> John
>
> ___
> Have your own email and web address for life.
>
> http://www.homemaster.net - Homemaster. Come Together. Online.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: List of Dates Grouped by Week

2004-10-26 Thread Dobromir Velev
Hi,
You can use the WEEK function
http://dev.mysql.com/doc/mysql/en/Date_and_time_functions.html

and the query will look like 

select week(Date) as weekID,User_ID,sum(hours) from Timesheets group by 
weekID,User_ID order by weekID;

You will have to do some additional math in your application to retrieve the 
dates when a week starts/ends, but this should not be a problem.
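
If you prefer to get the week boundaries directly from SQL, something like this 
should also work (an untested sketch - it assumes WEEK()'s default mode, where 
weeks start on Sunday, and the week_start/total_hours aliases are just names I 
picked):

select week(Date) as weekID,
       min(date_sub(Date, interval dayofweek(Date)-1 day)) as week_start,
       User_ID, sum(Hours) as total_hours
from Timesheets
group by weekID, User_ID
order by weekID;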

HTH
-- 
Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/


On Tuesday 26 October 2004 16:28, shaun thornburgh wrote:
> Hi,
>
> I am creating an online timesheet application. Most parts are done, however
> I have a problem displaying a list of unapproved timesheets.
>
> Here is my timesheet table:
>
> mysql> DESCRIBE Timesheets;
> +--------------+-------------+------+-----+------------+----------------+
> | Field        | Type        | Null | Key | Default    | Extra          |
> +--------------+-------------+------+-----+------------+----------------+
> | Timesheet_ID | int(11)     |      | PRI | NULL       | auto_increment |
> | Type         | varchar(40) | YES  |     | NULL       |                |
> | Project_ID   | int(11)     | YES  |     | NULL       |                |
> | User_ID      | int(11)     |      |     | 0          |                |
> | Hours        | float       |      |     | 0          |                |
> | Date         | date        |      |     | 0000-00-00 |                |
> | Status       | varchar(40) | YES  |     | Open       |                |
> +--------------+-------------+------+-----+------------+----------------+
>
> When a timesheet is submitted for approval the manager logs in and approves
> / rejects the timesheets. How can I display a list of unapproved timesheets
> grouped by week and user? i.e.
>
> Week 1 - Bill - 45 Hours
> Week 1 - Fred - 40 Hours
> Week 2 - Bill - 45 Hours
> Week 2 - Fred - 40 Hours
> Week 2 - Sam  - 12 Hours
>
> Thanks for your help.



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Messure time including milliseconds

2004-10-13 Thread Dobromir Velev
Hi,
I think it is better to implement this measurement in your application - that 
way you'll have an estimate of the time needed to execute your procedures, 
including the mysql calls.

Otherwise you can use the BENCHMARK MySQL function 
http://mysql.online.bg/doc/mysql/en/Information_functions.html

for example

mysql> select benchmark(1000,procedure_name);
+--------------------------------+
| benchmark(1000,procedure_name) |
+--------------------------------+
|                              0 |
+--------------------------------+
1 row in set (0.15 sec)


the average time to execute the procedure will be 0.15/1000 = 0.00015 seconds

HTH
-- 
Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/


On Tuesday 12 October 2004 19:47, Thomas Schager wrote:
> Hi,
>
> I need to messure the time needs of my procedures in MySQL. How can I
> reach a messurement of time intervals that include milliseconds, written
> in MySQL SQL statements?
>
>
> Thanks for any ideas,
>
> Thomas


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Optimize queries

2004-10-08 Thread Dobromir Velev
Hi,
Using sub-selects (MySQL 4.1 and higher) you can use something like

select count(*)/(select count(*) from table where field1='myvalue') as 
percentage from table where category='myvalue' group by category;

but I don't think you will gain much in performance this way. I'd rather use 
two queries - one for the total and one for the percentages. If field1 is 
indexed 

select count(*) from table where field1='myvalue' 

should be quite fast, so I don't think you should worry about having an 
additional query.
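
Concretely, the two-query approach would look something like this (a sketch 
that reuses the table/field names from the example above):

select count(*) as bigTotal from table where field1='myvalue';
select category, count(*) as categoryTotal from table where field1='myvalue' group by category;

and you then compute the percentage for each category from bigTotal and 
categoryTotal in your application, as you already do.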

HTH
-- 
Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/

> and from a script I calculated my percentage = bigToal/categoryTotal

On Friday 08 October 2004 10:48, Jacques Jocelyn wrote:
> Hello there,
>
> Until  I  had  query  time  restriction  due to my web hoster. Meaning
> whenever the query was too long, it gets killed !
> Now I have changed my webhoster, I'd like to optimize my queries and I
> would like to have your opinion on that.
> I  wrote multiple queries to prevent any long query time duration such
> as :
> until now I did the following
> - to obtain the total of item which match requirements
> -> select count(*) 'bigTotal' from table where field1='myvalue'
>
> then  I  selected  total  of   category from the same table to get the
> percentage of the total
> such as :
> -> select count(*) 'categoryTotal' from table where category1='myvalue'
> and from a script I calculated my percentage = bigToal/categoryTotal
>
> -> select count(*) 'categoryTotal' from table where category2='myvalue'
> and from a script I calculated my percentage = bigToal/categoryTotal
> etc..
>
> now, I have planned the following : create ONE query to do all this.
> is there a way then to merge the two previous queries in only one ?
>
> and calculate the percentage at the same time ?
> To  merge all my category queries, I can use a GROUP BY, but what
> about the bigTotal, can have that in the same query ?
>
> Please advise.
> Thanks
>
>
> Best regards,
> Jacques Jocelyn


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: optimizing InnoDB tables

2004-10-08 Thread Dobromir Velev
Hi,
According to the manual - 
http://dev.mysql.com/doc/mysql/en/OPTIMIZE_TABLE.html
http://dev.mysql.com/doc/mysql/en/InnoDB_File_Defragmenting.html

running a null ALTER statement - ALTER TABLE tbl_name TYPE=INNODB; - will 
rebuild the table thus optimizing the way the table is written to the disk. 
It will fix the physical ordering of the index pages on the disk thus 
improving the time MySQL needs to perform an index seek. It will not decrease 
the space used by the INNODB file but it could speed things up. If you want 
to regain some of the space used by the INNODB file you will have to convert 
all INNODB tables to MYISAM (or dump them to a SQL file), recreate the INNODB 
file(s) and then recreate the original INNODB tables. This process could 
take a lot of time depending on the size of your tables, so you should 
proceed with care.
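
As a rough outline of that table-by-table route (a sketch only - t1 stands for 
any of your InnoDB tables, and you should have a full backup before touching 
the InnoDB files):

ALTER TABLE t1 TYPE=MYISAM;   -- repeat for every InnoDB table (or mysqldump them)
-- stop mysqld, remove the old InnoDB data and log files, restart so they are recreated
ALTER TABLE t1 TYPE=INNODB;   -- convert the tables back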


HTH

-- 
Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/

On Thursday 07 October 2004 22:07, Boyd E. Hemphill wrote:
> The documentation is not clear on this point.  Here is a quote:
>
> 'For BDB tables, OPTIMIZE TABLE is currently mapped to ANALYZE TABLE. It
> was also the case for InnoDB tables before MySQL 4.1.3; starting from this
> version it is mapped to ALTER TABLE.'
>
> What is meant by its being mapped to ALTER TABLE?  Too, what exactly
> happens after 4.1.3?  Is space, in fact, recovered and defragged?
>
> Thanks for your time!
>
> Best Regards,
> Boyd E. Hemphill
> MySQL Certified Professional
> [EMAIL PROTECTED]
> Triand, Inc.
> www.triand.com
> O:  (512) 248-2278
> M:  (713) 252-4688
>
> -Original Message-
> From: Christopher L. Everett [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, October 06, 2004 6:23 PM
> To: 'Mysql List'
> Subject: Re: Repeated corruption with MySQL 4.1.x using FULLTEXT indexes
>
> Ed Lazor wrote:
> >>-Original Message-
> >>From: Christopher L. Everett [mailto:[EMAIL PROTECTED]
> >>Sent: Wednesday, October 06, 2004 1:47 AM
> >>To: Mysql List
> >>Subject: Repeated corruption with MySQL 4.1.x using FULLTEXT indexes
> >>
> >>I have an application where I create a faily large table (835MB) with a
> >>fulltext index.  One of our development workstations and our production
> >>server will run the script to load the table, but afterwards we have a
> >>pervasive corruption, with out of range index index pointer errors.
> >>Oddly, my development workstation doesn't have those problems.
> >>
> >>My box and the ones having the problems have the following differences:
> >>
> >>  - my box runs ReiserFS, the problem boxes run XFS
> >>  - my box has a nice SCSI HD subsystem, the problem boxes do IDE.
> >>
> >>All three boxes run Linux 2.6.x kernels, and my workstation and
> >> production server share the same mobo.  Come to think of it, I saw
> >> similar corruption issues under 2.4.x series kernels and MySQL v4.0.x,
> >> it just wasn't the show stopper it is now.
> >>
> >>Also, on all three boxes, altering the table to drop an index and create
> >>a new one requires a "myisamchk -rq" run afterwards when a fulltext index
> >>either exists or gets added or dropped, which I'd also call a bug.
> >
> >The problems you're describing are similar to what I've run into when
> > there have been hardware related problems.
> >
> >One system had a problem with ram.  Memory tests would test and report ram
> >as ok, but everything started working when I replaced the ram.  I think it
> >was just brand incompatibility or something odd, because the ram never
> > gave any problems in another system.
>
> I can generate the problem on much smaller data sets, in the mid tens of
> thousands of records rather than the millions of records.
>
> I'll do a memtest86 run on the development boxes overnight, but as I did
> that
> just after I installed linux on them and used the linux badram patch to
> exclude
> iffy sections of RAM, I don't think thats a problem.
>
> >One system had hard drive media slowly failing and this wasn't obvious
>
> until
>
> >we ran several full scan chkdsks.
>
> 3 hard drives all of different brand, model & size, and the problem
> happening
> in the same place on both?  Not likely.
>
> >The funniest situation was where enough dust had collected in the CPU fan
>
> to
>
> >cause slight over heating, which resulted in oddball errors.
>
> This isn't a problem on my box.  I have a 1.5 pound copper heatsink with a
> 90mm heat sensitive fan and a fan+heatsink for the hard drive, and I saw
> myisamchk consis

Re: HELP ME WITH THIS

2004-09-03 Thread Dobromir Velev
Hi,
The only way I can think of is to join the table to itself. It should look 
something like this:

select unix_timestamp(concat(e.date,' ',e.time)) - 
unix_timestamp(concat(s.date,' ',s.time)) as duration 
from table s left join table e on 
( s.CallingStationId=e.CallingStationId and 
s.CalledStationId=e.CalledStationId and e.AcctStatusType='Stop' ) 
where s.AcctStatusType='Start' 
order by s.date,s.time;

I haven't tested it, so you will probably need to play a bit with the 
date/time formatting, but it should be enough to give you the idea. It will only 
work if there is only one session for each CallingStationId and 
CalledStationId pair; otherwise you will receive a lot of irrelevant results.


-- 

Dobromir Velev

On Friday 03 September 2004 15:21, Peter J Milanese wrote:
> If it were all in one row, you may be able to compare datetime fields.
>
> I do not know if you can do this with 2 rows, and the query will probably
> be rough.
>
> Did you design the table? Can you create it so that your row has start and
> stop times, instead of creating another row?
>
> > -Original Message-
> > From: Karma Dorji [mailto:[EMAIL PROTECTED]
> > Sent: Friday, September 03, 2004 5:06 AM
> > To: [EMAIL PROTECTED]
> > Subject: HELP ME WITH THIS
> >
> > Hello can anyone help me with this,
> >
> > i have a table, like the one below,
> > i need to find the time difference between the Start and Stop from a
> > particular CallingStationId to particular CalledStationId.
>
> > +------------+----------+------------------+-----------------+----------------+-----------------+
> > | Date       | Time     | CallingStationId | CalledStationId | AcctStatusType | AcctSessionTime |
> > +------------+----------+------------------+-----------------+----------------+-----------------+
> > | 09/01/2004 | 17:28:27 | 02327125         | 00018151635     | Start          |                 |
> > | 09/01/2004 | 19:00:34 | 02320176         | 01181471822125  | Start          |                 |
> > | 09/01/2004 | 19:10:08 | 17113080         | 01022586815     | Start          |                 |
> > | 09/01/2004 | 20:28:27 | 02327125         | 00018151635     | Stop           |                 |
> > | 09/01/2004 | 21:00:34 | 02320176         | 01181471822125  | Stop           |                 |
> > | 09/01/2004 | 22:10:08 | 17113080         | 01022586815     | Stop           |                 |
> > +------------+----------+------------------+-----------------+----------------+-----------------+
> >
> > Thanking you all in advance.
> >
> > Karma
> >
> >
> > --
> > MySQL General Mailing List
> > For list archives: http://lists.mysql.com/mysql
> > To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Innodb space - Different Question

2004-09-03 Thread Dobromir Velev
Hi,
This is the free space left in the InnoDB file. InnoDB will claim all the 
space that it is configured for and if autoextend is turned on it will use 
additional space when needed until the file limit is reached.

For example my current configuration is

innodb_data_file_path =/var/lib/mysql/innodbfile:100M:autoextend:max:60G

When InnoDB first started it created a 100 MB file, and over time this file 
has grown to 20 GB. For more information check

http://dev.mysql.com/doc/mysql/en/InnoDB_configuration.html

HTH

Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/

On Friday 03 September 2004 11:47, Stuart Felenstein wrote:
> Since the recent thread with Innodb table space, I
> have had a concern for a while.
>
> First, I use Navicat to administer mysql.  This
> includes table creation.
> I don't notice this using a myisam table but when I
> set up an innodb I find this item:
>
> InnoDB free: 10240 kB
>
> Does this mean the actual space provided for the
> records it can hold ?
>
> I was concerned, then someone (not very knowledgable)
> told me not to be concerned.  I'm concerned.
>
> Thank you,
> Stuart

-- 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Mysql and errno 13 on RedHat

2004-09-01 Thread Dobromir Velev
Hi again,
These multiple mysql processes are nothing to worry about - they are the 
threads that mysql opens to accept incoming connections. You can configure 
the maximum number of connections and the mysql memory usage in the 
/etc/my.cnf file.

On Wednesday 01 September 2004 02:14, Yannick Warnier wrote:
> On Tue 31/08/2004 at 16:21, Dobromir Velev wrote:
> > Hi,
> > Make sure that the mysql user have permissions to write into the folder
> > /var/lib/mysql (or whatever your data folder is) .
>
> Thanks Eric and Dobromir. This was a problem of permissions on
> /var/lib/mysql indeed. Although it did not work immediately after a
> change, I saw (how awful) that the mysql version was a 3.23 or something
> and replaced it with the recommended version 4.
>
> Now it all works perfectly (although mysqld is now divided in ten
> different processes, but I don't know why and I don't mind as long as it
> works).
>
> Thanks,
> Yannick

-- 
Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Mysql and errno 13 on RedHat

2004-08-31 Thread Dobromir Velev
Hi,
Make sure that the mysql user has permission to write into the folder 
/var/lib/mysql (or whatever your data folder is).

HTH

On Tuesday 31 August 2004 18:00, Yannick Warnier wrote:
> Hi there,
>
> I need to install a database on a RedHat system. Although I'm pretty at
> ease with Debian, I feel a little unconfortable and cannot find why I
> get an Error: errno 13 (Permission Denied) when I try to create a new
> database.
>
> Is there some place to look at first? I cannot find any way to do this
> easily. There is no strace installed on the system (nor any possibility
> to do so easily).
>
> Any idea?
>
> Thanks,
>
> Yannick

-- 

Dobromir Velev


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Sort by COUNT(field_name) ?

2004-08-27 Thread Dobromir Velev
You can use a query like this one

SELECT string, COUNT(string) as co FROM searchstat GROUP BY string order by co 
DESC;

HTH 

-- 
Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/

On Friday 27 August 2004 16:03, Dialogcentret wrote:
> Example: I have a search box on my webpage and all search strings are saved
> in a database.
>
> $foresp = mysql_query("SELECT string, COUNT(string) FROM searchstat GROUP
> BY string ",$db);
>
> while ($data = mysql_fetch_array($foresp)) {
>  echo "($data[1]) " . str_replace('<', '&lt;', $data[string]) . " ";
> }
>
> However, I would like to write the result out with the most popular search
> strings first. Is it possible to sort the output by count(field_name) ?
>
>
>
> Med venlig hilsen
> Birger Langkjer
> Dialogcentret



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: PHP / Mysql people question

2004-08-26 Thread Dobromir Velev
Hi,
First mysql_fetch_row() returns an array so it doesn't make much sense to 
compare it to 5. If you want to check the number of records the query has 
returned, you should use mysql_num_rows() - like this:

if(mysql_num_rows($result)>=5){ }

or if you want to check the value returned by the query, something like this 
should work, though I would personally add some additional error checking.

if(mysql_result($result,0,0)>=5){ }

HTH
-- 
Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/


On Wednesday 25 August 2004 18:02, Stuart Felenstein wrote:
> Thought earlier that I could get by on just plain SQL.
>
> Php needed.  I'm getting a parse error on the "if"
> line.
> This is supposed to be a trigger then before
> transaction counts the number of rows , id already
> inserted, if exceeded , kick back error message , halt
> transaction.
> Not sure where I need to go with this statement to
> make it work. Ideas , help, suggestions welcome!
>
> //start trigger Trigger1
> function KT_TriggerSTARTER_Trigger1(&$tNG) {
> $result = mysql_query("SELECT LurkTitleTim.LurkID
> WHERE
> LurkID='$new_input'");
>
> if ($row=mysql_fetch_row($result) >= 5) {  error- data
> already exists   }
>
> else { go ahead and add data with INSERT statement
> }
> }
>
> Thank you ,
> Stuart



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: C API -- huge result sets slowin me down

2004-06-29 Thread Dobromir Velev
Hi, 
The only thing that could slow you down is that the genAttrib array will take 
more and more memory as the result set grows. I would recommend creating 
a function that uses the mysql row directly instead of building this huge 
array.

something like

while ((row = mysql_fetch_row(result))){
 usedata(row);
}

Of course it depends on what you need the mysql data for - but if you can 
make it use one row at a time it should run a lot faster.

-- 
Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/


On Tuesday 29 June 2004 08:50, Matt Eaton wrote:
> Hi all,
>
> I was hoping this was the right place for a question about the C API.
> I've been grabbing result sets from tables in the C API for a few years
> now, but I'm starting to work with result sets that are big enough to
> bog me down.  Of course, the result sets aren't insanely big, so I was
> wondering why it was taking so long for me to suck them in to C,
> especially when I can run the same query from the command line using the
> binaries and they can cache it to a file on the hard disk pretty much
> instantly.  So, basically, I was just hoping that I've been doing
> something wrong, or at least that there was something I could do better,
> to make my database communication as fast as the mysql command line
> tools.  I've checked out their source and nothing obvious jumps out at
> me.  Here's a non-functional sample of my code:
>
> int main(int argc, char *argv[] ) {
>   int uid;
>   int sid;
>   char sqlBuff[4000];
>   int err = 0;
>   int i;
>   // Setup the database communications space:
>   MYSQL dbase;
>   MYSQL_RES *result;
>   MYSQL_ROW row;
>
>   float **genAttrib;
>
>   //... snip ...
>
>
>   // Connect to the database:
>   if (mysql_init(&dbase) == NULL) err = 1;
>   else {
>
>
> if(mysql_real_connect(&dbase,"localhost","login","pass","test",0,NULL,CL
> IENT_FOUND_ROWS) == NULL) {
>   err = 1;
>   fprintf(stderr, "Failed to connect to database:
> Error: %s\n",
>   mysql_error(&dbase));
>   }
>   }
>
>   // If the connection couldn't be established:
>   if(err) {
>   printf("db connection failed!\n");
>   exit(1);
>   }
>
>
>   //... snip ...
>
>   // This query could have as many as a million rows returned, but
> the query itself runs quite fast.  It seems to just be
>   // sucking it into C that can take up to four seconds on our
> dual Xeon server.
>   sprintf(sqlBuff,"SELECT A.* FROM `attribs` as A, login AS L
> WHERE A.guid=L.guid AND L.isActive=1 AND L.sid=%d AND
> A.guid!=%d",sid,uid);
>   if (mysql_real_query(&dbase,sqlBuff,strlen(sqlBuff))) {
>   printf("Pool Attributes Select Failed... dumbass\n");
>   fprintf(stderr, "Error: %s\n",
>   mysql_error(&dbase));
>   exit(1);
>   }
>
>   result = mysql_store_result(&dbase);
>   numRows=mysql_num_rows(result);
>   for (i=0;i<numRows;i++) {
>   row = mysql_fetch_row(result);
>   tempq=atoi(row[1]);
>   tempP=atoi(row[0]);
>   genAttrib[tempP][tempq]=atoi(row[2]);
>   }
>
> return 0;
> }
>
> So, if someone sees something that I could change to speed things up, or
> I should direct this question elsewhere... thanks for your help and
> thanks for reading this far!
>
> Thanks again,
> Matt



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Query Help

2004-06-11 Thread Dobromir Velev
Hi,

You could use either something like this
SELECT  gallery_id, gallery_name
FROM galleries g 
WHERE   keywords rlike '(^|,)$keyword_id(,|$)';

or 

SELECT  gallery_id, gallery_name
FROM galleries g 
WHERE   FIND_IN_SET($keyword_id, keywords);


and replace the $keyword_id with the id of the keyword you are looking for. 

Personally I would prefer the first option and put an index on the keywords 
field but you should check for yourself which query will work faster.


-- 
Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/

On Friday 11 June 2004 14:22, Andrew Dixon - MSO.net wrote:
> Hi Everyone.
>
> I have a table with a varchar column that contains a comma
> delimited list of id's from another table that relates the item keywords in
> the other table.
>
> The table keywords contains
>
> keyword_id (int/auto increment/primary key)
> Keyword (varchar/normal key)
>
> The galleries table contains:
>
> gallery_id (int/auto increment/primary key)
> gallery_name (varchar)
> keywords (varchar)
>
> I didn't design the database and I know it is not a good design, but I'm
> stuck with it and I need a query to get the gallery_id when I have a
> certain keyword_id
>
> For example:
>
> gallery_id | gallery_name | keywords
> 1  | test | 1,2,3,4
> 2  | test2| 3,4,5,6
>
> And I want to get all the galleries where they have the keyword 2,
> which in this case would be record 1, or keyword 4, which would be both
> records.
>
> SELECT gallery_id, gallery_name
> FROM  galleries
> WHERE keywords 
>
> Hope that makes sense, thanks in advanced.
>
> Best Regards,
>
> Andrew Dixon.



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Binary logfiles eating all my disk space

2004-06-11 Thread Dobromir Velev
Here it is

#!/usr/bin/perl
use strict;
use DBI;
my $db_host="localhost";
my $db_user="username";
my $db_pass="password";
my $db_name="database";
my $mail_prog = '/usr/lib/sendmail';
my $email='[EMAIL PROTECTED]';
my $from_email='[EMAIL PROTECTED]';

sub mysql_die{
if ($_[0]){ print $_[0]."\n";}
else{ print "MySQL Error: ".$DBI::errstr."\n";}
open (MAIL, "|$mail_prog -t");
print MAIL "To: $email\n";
print MAIL "Reply-to: $from_email\n";
print MAIL "From: $from_email\n";
print MAIL "Content-type: text/plain;charset=windows-1251\n";
print MAIL "Subject: DB Clean error - ".localtime()."\n";
print MAIL "\n";
print MAIL "MySQL Error: ".$DBI::errstr."\n";
close (MAIL);
exit;
}

my @log;
my $dbh = 
DBI->connect("DBI:mysql:$db_name:$db_host",$db_user,$db_pass,{PrintError => 
0, AutoCommit => 1}) or  die $DBI::errstr;
my $cmycres=$dbh->prepare("show master logs");
$cmycres->execute() or mysql_die();
my $i=7; #this will leave the last seven logs intact
while ($i<=$cmycres->rows){
    @log=$cmycres->fetchrow;
    $i++;
}
$cmycres->finish();

print localtime()." purging logs to $log[0]\n";
$dbh->do("purge master logs to '$log[0]'") or mysql_die();
$dbh->disconnect;
print "Logs purge end: ".localtime()."\n";

-- 
Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/

On Friday 11 June 2004 00:55, Jeff Smelser wrote:
> On Thursday 10 June 2004 08:03 am, Josh Trutwin wrote:
> > Would you be willing to share your perl script?  Perhaps offlist if you
> > are concerned about everyone seeing it?
>
> I would like to see it as well.. If at all possible
>
> --
> How can there be self-help groups?



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Binary logfiles eating all my disk space

2004-06-11 Thread Dobromir Velev
Hi,
There is a thing I forgot to mention in my previous email - if you are 
replicating your database please follow the steps described in 

http://dev.mysql.com/doc/mysql/en/PURGE_MASTER_LOGS.html

or you might end up with missing data on your slave servers. A binary log should 
not be deleted unless all slaves have already processed it.

-- 
Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/

On Friday 11 June 2004 00:55, Jeff Smelser wrote:
> On Thursday 10 June 2004 08:03 am, Josh Trutwin wrote:
> > Would you be willing to share your perl script?  Perhaps offlist if you
> > are concerned about everyone seeing it?
>
> I would like to see it as well.. If at all possible
>
> --
> How can there be self-help groups?



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Binary logfiles eating all my disk space

2004-06-10 Thread Dobromir Velev
I've made a simple perl script that uses the 

show master logs 

and

purge master logs to 'mysql.???'

queries to remove all but the last seven logs and it works perfectly for me.

Check http://dev.mysql.com/doc/mysql/en/PURGE_MASTER_LOGS.html for details.

-- 
Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/


On Thursday 10 June 2004 10:05, Søren Neigaard wrote:
> Hi
>
> I have a MySQL (mysql-standard-4.0.17-pc-linux-i686) running as master
> where I have attached a slave to for replication. The problem now is that
> on the master there is a LOT of mysql-bin.XXX files, and they each take 1GB
> diskplace.
>
> Are these files needed still, or can i delete the old ones?
>
> Can I get MySQL to cleanup these files automatically?
>
> Med venlig hilsen/Best regards
> Søren Neigaard
> System Architect
> 
> Mobilethink A/S
> Arosgaarden
> Åboulevarden 23, 4.sal
> DK - 8000 Århus C
> Telefon: +45 86207800
> Direct: +45 86207810
> Fax: +45 86207801
> Email: [EMAIL PROTECTED]
> Web: www.mobilethink.dk
> 
>
> ---
> Outgoing mail is certified Virus Free.
> Checked by AVG anti-virus system (http://www.grisoft.com).
> Version: 6.0.701 / Virus Database: 458 - Release Date: 07-06-2004



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Average Time per query.

2004-06-08 Thread Dobromir Velev
Check out the BENCHMARK function

http://dev.mysql.com/doc/mysql/en/Information_functions.html
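
For example, a rough way to time an expression (the MD5() call is just a 
stand-in for whatever you want to measure):

mysql> SELECT BENCHMARK(100000, MD5('some text'));

The elapsed time reported by the client, divided by 100000, gives an 
approximate per-execution time. Note that BENCHMARK only times expressions, 
not whole queries - for complete queries it is usually easier to time them 
from the application or to use the slow query log.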

On Tuesday 08 June 2004 20:22, Jeffrey M. Johnson wrote:
> How do you determine the average time per query in MySQL?
>
> Jeff Johnson

-- 
Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Problem with EQ_REF and ALL

2003-12-23 Thread Dobromir Velev
Hi,
Sorry for not responding more promptly.

To remove a index you can use a command like this one

ALTER TABLE tbl_klanten DROP INDEX klant_id;

or using phpMyAdmin - just click on the "delete" link next to the index.

Anyway, as I told you before, I don't think this is causing the problem - the additional 
index will never be used and it will just take unnecessary space in your index file. 
You should check the SQL query syntax - for example, adding brackets might help.


HTH
Dobromir Velev
  - Original Message - 
  From: Koen Van Mulders 
  To: Dobromir Velev 
  Sent: Friday, December 19, 2003 13:52
  Subject: Re: Problem with EQ_REF and ALL


  Hello,

  first of all, tnx for the great help so far !

  I run MySQL 3.24.

  You say I have some duplicate indexes
  I didn't add KEY(klant_id) for instance, phpmyadmin did that :-s

  How do i drop the KEY but keep the PRIMARY KEY.
  I can't find any info on KEY, only on PRIMARY KEY...

  What did you change on the create table syntax ?
  And how do I make your changes but on the already existing table ?

  Tnx in advance !
- Original Message - 
From: Dobromir Velev 
To: [EMAIL PROTECTED] 
Sent: Friday, December 19, 2003 11:42 AM
Subject: Re: Problem with EQ_REF and ALL


Hi,
I can't seem to find any problems, except for the few duplicate indexes in 
tbl_klanten, tbl_v_levering and tbl_v_bestelbon - when you have PRIMARY KEY (klant_id) 
you don't need to add KEY klant_id (klant_id). 

I created the tables  on a 4.0.12 server and the EXPLAIN returned EQ_REF on both 
queries. Here is the EXPLAIN output:

mysql> EXPLAIN SELECT * FROM tbl_v_batch b LEFT JOIN tbl_v_levering l ON l.ver_lev_id = b.batch_lever_id LEFT JOIN tbl_klanten k ON k.klant_id = l.ver_lev_klant_id;
+-------+--------+---------------+---------+---------+--------------------+------+-------+
| table | type   | possible_keys | key     | key_len | ref                | rows | Extra |
+-------+--------+---------------+---------+---------+--------------------+------+-------+
| b     | ALL    | NULL          | NULL    |    NULL | NULL               |    2 |       |
| l     | eq_ref | PRIMARY       | PRIMARY |       4 | b.batch_lever_id   |    1 |       |
| k     | eq_ref | PRIMARY       | PRIMARY |       4 | l.ver_lev_klant_id |    1 |       |
+-------+--------+---------------+---------+---------+--------------------+------+-------+
3 rows in set (0.00 sec)

mysql> EXPLAIN SELECT * FROM tbl_v_batch b LEFT JOIN tbl_v_bestelbon bb ON bb.ver_id = b.batch_bestel_id LEFT JOIN tbl_klanten k1 ON k1.klant_id = bb.ver_klant_id;
+-------+--------+---------------+---------+---------+-------------------+------+-------+
| table | type   | possible_keys | key     | key_len | ref               | rows | Extra |
+-------+--------+---------------+---------+---------+-------------------+------+-------+
| b     | ALL    | NULL          | NULL    |    NULL | NULL              |    2 |       |
| bb    | eq_ref | PRIMARY       | PRIMARY |       4 | b.batch_bestel_id |    1 |       |
| k1    | eq_ref | PRIMARY       | PRIMARY |       4 | bb.ver_klant_id   |    1 |       |
+-------+--------+---------------+---------+---------+-------------------+------+-------+
3 rows in set (0.00 sec)

So the last thing I could think of is putting some brackets to define the order in 
which the joins will be made like

SELECT * FROM (tbl_v_batch b LEFT JOIN tbl_v_levering l ON l.ver_lev_id = 
b.batch_lever_id) LEFT JOIN tbl_klanten k ON k.klant_id = l.ver_lev_klant_id;


HTH
Dobromir Velev
[EMAIL PROTECTED]


  - Original Message - 
  From: Koen Van Mulders 
  To: Dobromir Velev 
  Sent: Thursday, December 18, 2003 17:52
  Subject: Re: Problem with EQ_REF and ALL


  Please excuse me if this comes to your personal mailbox,
  i have no clue on how to use these lists :-)
  I am proud I could post my problem, I don't know how to reply to it :-(.

  Anyway, here are the table structures :

  
---

  #
  # Tabel structuur voor tabel `tbl_klanten`
  #

  CREATE TABLE tbl_klanten (
klant_id int(11) NOT NULL auto_increment,
klant_nummer int(11) NOT NULL default '0',
klant_bedrijfsnaam text NOT NULL,
klant_voornaam tinytext NOT NULL,
klant_achternaam tinytext NOT NULL,
klant_straat_nr text NOT NULL,
klant_postnr text NOT NULL,
klant_stad text NOT NULL,
klant_tel text NOT NULL,
klant_fax text NOT NULL,
klant_gsm text NOT NULL,
klant_email text NOT NULL,
klant_btw text NOT NULL,
klant_hr text NOT NULL,
klant_specialisatie text NOT NULL,

Re: Problem with EQ_REF and ALL

2003-12-19 Thread Dobromir Velev
Hi,
I can't seem to find any problems, except for the few duplicate indexes in 
tbl_klanten, tbl_v_levering and tbl_v_bestelbon - when you have PRIMARY KEY (klant_id) 
you don't need to add KEY klant_id (klant_id). 

I created the tables  on a 4.0.12 server and the EXPLAIN returned EQ_REF on both 
queries. Here is the EXPLAIN output:

mysql> EXPLAIN SELECT * FROM tbl_v_batch b LEFT JOIN tbl_v_levering l ON l.ver_lev_id = b.batch_lever_id LEFT JOIN tbl_klanten k ON k.klant_id = l.ver_lev_klant_id;
+-------+--------+---------------+---------+---------+--------------------+------+-------+
| table | type   | possible_keys | key     | key_len | ref                | rows | Extra |
+-------+--------+---------------+---------+---------+--------------------+------+-------+
| b     | ALL    | NULL          | NULL    |    NULL | NULL               |    2 |       |
| l     | eq_ref | PRIMARY       | PRIMARY |       4 | b.batch_lever_id   |    1 |       |
| k     | eq_ref | PRIMARY       | PRIMARY |       4 | l.ver_lev_klant_id |    1 |       |
+-------+--------+---------------+---------+---------+--------------------+------+-------+
3 rows in set (0.00 sec)

mysql> EXPLAIN SELECT * FROM tbl_v_batch b LEFT JOIN tbl_v_bestelbon bb ON bb.ver_id = b.batch_bestel_id LEFT JOIN tbl_klanten k1 ON k1.klant_id = bb.ver_klant_id;
+-------+--------+---------------+---------+---------+-------------------+------+-------+
| table | type   | possible_keys | key     | key_len | ref               | rows | Extra |
+-------+--------+---------------+---------+---------+-------------------+------+-------+
| b     | ALL    | NULL          | NULL    |    NULL | NULL              |    2 |       |
| bb    | eq_ref | PRIMARY       | PRIMARY |       4 | b.batch_bestel_id |    1 |       |
| k1    | eq_ref | PRIMARY       | PRIMARY |       4 | bb.ver_klant_id   |    1 |       |
+-------+--------+---------------+---------+---------+-------------------+------+-------+
3 rows in set (0.00 sec)

So the last thing I could think of is putting some brackets to define the order in 
which the joins will be made like

SELECT * FROM (tbl_v_batch b LEFT JOIN tbl_v_levering l ON l.ver_lev_id = 
b.batch_lever_id) LEFT JOIN tbl_klanten k ON k.klant_id = l.ver_lev_klant_id;


HTH
Dobromir Velev
[EMAIL PROTECTED]


  - Original Message - 
  From: Koen Van Mulders 
  To: Dobromir Velev 
  Sent: Thursday, December 18, 2003 17:52
  Subject: Re: Problem with EQ_REF and ALL


  Please excuse me if this comes to your personal mailbox,
  i have no clue on how to use these lists :-)
  I am proud I could post my problem, I don't know how to reply to it :-(.

  Anyway, here are the table structures :

  
---

  #
  # Tabel structuur voor tabel `tbl_klanten`
  #

  CREATE TABLE tbl_klanten (
klant_id int(11) NOT NULL auto_increment,
klant_nummer int(11) NOT NULL default '0',
klant_bedrijfsnaam text NOT NULL,
klant_voornaam tinytext NOT NULL,
klant_achternaam tinytext NOT NULL,
klant_straat_nr text NOT NULL,
klant_postnr text NOT NULL,
klant_stad text NOT NULL,
klant_tel text NOT NULL,
klant_fax text NOT NULL,
klant_gsm text NOT NULL,
klant_email text NOT NULL,
klant_btw text NOT NULL,
klant_hr text NOT NULL,
klant_specialisatie text NOT NULL,
klant_q_assistenten smallint(6) NOT NULL default '0',
klant_q_tandartsen smallint(6) NOT NULL default '0',
klant_openingsuren text NOT NULL,
klant_active enum('0','1') NOT NULL default '1',
klant_vertegenwoordiger int(11) NOT NULL default '0',
klant_riziv text NOT NULL,
klant_btw_plichtig enum('1','2') NOT NULL default '1',
klant_type text NOT NULL,
klant_aanspreektitel text NOT NULL,
klant_taal text NOT NULL,
klant_betaaltermijn int(4) NOT NULL default '0',
klant_bank text NOT NULL,
klant_krediet text NOT NULL,
klant_korting text NOT NULL,
klant_creatie_id text NOT NULL,
klant_creatie_datum text NOT NULL,
klant_creatie_ok_id text NOT NULL,
klant_laatste_edit text NOT NULL,
klant_cat_id int(11) NOT NULL default '0',
klant_type_klant enum('KLANT','FILIAAL','BEDRIJF') NOT NULL default 'KLANT',
klant_filiaal varchar(20) NOT NULL default '',
klant_vertegenwoordiger_ok enum('0','1') NOT NULL default '1',
PRIMARY KEY  (klant_id),
KEY klant_id (klant_id)
  ) TYPE=MyISAM;
  # 

  #
  # Tabel structuur voor tabel `tbl_v_batch`
  #

  CREATE TABLE tbl_v_batch (
batch_id int(11) NOT NULL auto_increment,
batch_type enum('B','L','F','C','FL','S')

Re: Problem with EQ_REF and ALL

2003-12-18 Thread Dobromir Velev
Can you please send the table structures? This happens when the fields you
are using in the JOIN clause have different types, for example when
trying to join a VARCHAR column to an INT column.

Dobromir Velev
[EMAIL PROTECTED]

- Original Message - 
From: "Koen Van Mulders" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, December 18, 2003 16:54
Subject: Problem with EQ_REF and ALL


I think they are all indexed, yes.
I have an index on :

bb.ver_id
b.batch_bestel_id
k.klant_id (k1 is the same)
b.batch_lever_id
l.ver_lev_id

Tnx in advance


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Problem with EQ_REF and ALL

2003-12-18 Thread Dobromir Velev
Hi,
Have you indexed the fields you're using in  the JOIN clauses?

Dobromir Velev
[EMAIL PROTECTED]


- Original Message - 
From: "Koen Van Mulders" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, December 18, 2003 15:45
Subject: Problem with EQ_REF and ALL


Someone please help me...

I have 2 queries (in the end I end up joining them with a LEFT JOIN, but
they don't "work" properly)

SELECT *
FROM tbl_v_batch b
LEFT JOIN tbl_v_levering l ON l.ver_lev_id = b.batch_lever_id
LEFT JOIN tbl_klanten k ON k.klant_id = l.ver_lev_klant_id

This one gives ALL on tbl_v_levering l

SELECT *
FROM tbl_v_batch b
LEFT JOIN tbl_v_bestelbon bb ON bb.ver_id = b.batch_bestel_id
LEFT JOIN tbl_klanten k1 ON k1.klant_id = bb.ver_klant_id

This one, on the other hand is correct and gives EQ_REF !

How is this possible ?
It is almost exactly the same table structure :s...

Somebody please help..


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: DELETE on a huge table; how long *should* it take?

2003-12-05 Thread Dobromir Velev
Hi,
You can do it in several smaller and faster deletes using the LIMIT option -
for example

DELETE QUICK from table WHERE indexed_row < UNIX_TIMESTAMP()-86400 limit
1;
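
That is, run a statement along these lines repeatedly until no more rows are 
affected (the batch size of 10000 is only an illustrative value - tune it to 
your hardware and workload):

DELETE QUICK FROM table WHERE indexed_row < UNIX_TIMESTAMP()-86400 LIMIT 10000;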



HTH
Dobromir Velev
[EMAIL PROTECTED]


- Original Message - 
From: "Chris Elsworth" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, December 05, 2003 12:42
Subject: DELETE on a huge table; how long *should* it take?


> Hello,
>
> I have quite a large table, 45 million rows, which has 3 indexes on
> it. The rows are evenly distributed across one particular index, which
> records the time the row was inserted. At any given time there's
> between 20 and 21 days worth of rows, and every night I delete
> anything over 20 days. So I'm deleting about 2.2 million rows, with
> what is basically:
> DELETE QUICK from table WHERE indexed_row < UNIX_TIMESTAMP()-86400
>
> I have PACK_KEYS=1 DELAY_KEY_WRITE=1, and this ia MyISAM table. Now,
> roughly, should this take half an hour or more? It seems very disk
> bound, producing lots of small disk transactions. I wouldn't really
> mind, but the entire table is locked for the process and the site it's
> powering grinds to a halt.
>
> My first thought is to change it to InnoDB and use a transaction so
> the delete can take as long as it wants without interrupting anything
> else. I am however I bit worried about space; the MyISAM files are
> using 5G for data + 763M for index; it's only an 18G drive thus I'm a
> bit worried the InnoDB equivalent is going to be too big.
>
> Any other pointers, speedup tips, ways to avoid this issue entirely?
>
> -- 
> Chris
>
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
>
>


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: index question

2003-06-13 Thread Dobromir Velev
Hi,
You need to index the column that is used in the WHERE clause.
Try this:
ALTER TABLE tempo_resposta ADD INDEX  idmaquina (idmaquina);

HTH
Dobromir Velev

- Original Message -
From: "Leonardo Rodrigues Magalhães" <[EMAIL PROTECTED]>
To: "MySQL ML" <[EMAIL PROTECTED]>
Sent: Friday, June 13, 2003 17:21
Subject: index question


>
> Hello Guys,
>
> I have the following table:
>
> CREATE TABLE tempo_resposta (
>   id int(11) NOT NULL auto_increment,
>   idmaquina int(11) NOT NULL default '0',
>   tempo int(11) NOT NULL default '0',
>   horario datetime NOT NULL default '0000-00-00 00:00:00',
>   PRIMARY KEY  (id)
> ) TYPE=MyISAM;
>
> In this table, I'll be running this query:
>
> select tempo,unix_timestamp(now())-unix_timestamp(horario) as diferenca
from
> tempo_resposta where idmaquina=SOMEID order by diferenca desc
>
> Right now, seems table is being completly scanned with this query:
>
> mysql> explain select tempo,unix_timestamp(now())-unix_timestamp(horario) as
> diferenca from tempo_resposta where idmaquina=23 order by diferenca desc;
> +----------------+------+---------------+------+---------+------+------+-----------------------------+
> | table          | type | possible_keys | key  | key_len | ref  | rows | Extra                       |
> +----------------+------+---------------+------+---------+------+------+-----------------------------+
> | tempo_resposta | ALL  | NULL          | NULL |    NULL | NULL | 9216 | Using where; Using filesort |
> +----------------+------+---------------+------+---------+------+------+-----------------------------+
> 1 row in set (0.00 sec)
>
> rows=9216, exactly all rows in the table
>
>
> Question: is there a way of creating an index for helping that kind of
> query ? I've tried creating index on horario, but it doesnt helped.
>
>
> Sincerily,
> Leonardo Rodrigues
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
>
>


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: mysql under unix in batch mode

2003-03-26 Thread Dobromir Velev
Hi,
Most probably the problem is that there should not be a space after -p.
Try this:

mysql -h localhost -uroot -pmysql < did.query > did.out

Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/

- Original Message -
From: "Marianadin, Didier" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, March 26, 2003 13:12
Subject: mysql under unix in batch mode


> Hi,
>
> I want to execute a simple query in batch mode in a mysql database.
> my OS : Unix (solaris 2.6)
>
> when I run this command :
> mysql -h localhost -u root -p mysql < did.query > did.out
>
> I'm always asked password even if I've already specified it in the
previous
> command line (mysql)
> Enter password:
>
> How to avoid it ?
>
> Didier ([EMAIL PROTECTED]).
>
>
>
>
>
>
>
>


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: sizes

2003-03-26 Thread Dobromir Velev
They both use 1 byte.
Check this for more information:

 http://www.mysql.com/doc/en/Storage_requirements.html 

Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/


- Original Message - 
From: "Mattias Barthel" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, March 25, 2003 19:02
Subject: sizes


which one is the biggest datatype:
enum('true', 'false'); or
tinyint(1);

I mean biggest when it comes to storing them on disk.
Thank you,
Mattias.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: embedded group function

2003-03-25 Thread Dobromir Velev
Hi,
You can use something like this:

select count(*) as co
from ...
group by ...
order by co DESC limit 0,1
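
For instance, with a purely illustrative table t and grouping column grp:

SELECT grp, COUNT(*) AS co
FROM t
GROUP BY grp
ORDER BY co DESC
LIMIT 0,1;

This returns the group with the highest count together with that count, so no 
temporary table is needed.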

HTH
Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/


- Original Message -
From: "Hu Qinan" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, March 25, 2003 16:03
Subject: embedded group function


> All,
>
> I tried to use
> SELECT MAX(COUNT(*))
> FROM ...
> GROUP BY ...
>
> in MySQL. But an error occured: Invalid use of group function.
>
> Is it necessary to create a temporary table to store the COUNT(*) results,
and then get the MAX from it?
>


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Deleting a large group of numbered records

2003-03-25 Thread Dobromir Velev
Hi,

I would use the following sql

delete from table where id in  (value1,value2, ...)

But if the list is too big you might want to increase the "max_allowed_packet"
configuration variable.

Also you can write a simple script to read the values from the file. Here is
an example in PHP

$fp=fopen("filename","r");
if($fp){
 while (!feof($fp)){
  $line=trim(fgets($fp,4096));
  if($line!=="") $todelete.=$line.",";
 }
 fclose($fp);
}
if ($todelete){
 mysql_query("delete from table where id in (".substr($todelete,0,-1).")",$db);
}

or if you want to delete the records line by line:

$fp=fopen("filename","r");
if($fp){
 while (!feof($fp)){
  $line=fgets($fp,4096);
  mysql_query("delete from table where id = $line",$db);
 }
 fclose($fp);
}

HTH
Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/


- Original Message -
From: "Scott Haneda" <[EMAIL PROTECTED]>
To: "MySql" <[EMAIL PROTECTED]>
Sent: Tuesday, March 25, 2003 09:57
Subject: Deleting a large group of numbered records


> I have a huge text file, \r separated, for example:
> 10065
> 10066
> 10067
> 10069
> 10070
> 10075
> 10091
> 10093
> 10094
> 10100
> 10103
> 10104
> 10107
> 10108
> 10113
> 10114
> 10115
> 10116
> 10117
> 10118
> 10119
> 10120
> 10121
> 10122
> 10123
> 10124
> However, there are about 5000 numbers in it, I need a way to delete from
> table where old_id = one of the above numbers and then repeat on, is there
a
> easy way to construct this statement, perhaps reading the data from a
> infile.
>
> Mysql
>
> -
> Scott HanedaTel: 415.898.2602
> http://www.newgeo.com   Fax: 313.557.5052
> [EMAIL PROTECTED]Novato, CA U.S.A.
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
>
>


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: sql_big_selects

2003-02-08 Thread Dobromir Velev
Hi,
Isn't there a way to reduce the number of rows that are scanned based on an
index - for example
the query

SELECT * FROM email WHERE id>100 AND subject = 'RE: TEST EMAIL' ORDER BY
`id` DESC LIMIT 0,25;

will be significantly faster (it will scan fewer rows) and will most probably
return the same results. Using the EXPLAIN command will give you an estimate
of how many rows the query will scan before returning a result.
Check http://www.mysql.com/doc/en/EXPLAIN.html for more information.

If you plan to run this query often you should consider indexing the subject
field, or you will have to wait a long time every time the query executes.
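
For example (a sketch - subject_idx is just a name I made up, and the prefix 
length of 100 is an illustrative value; a TEXT column requires a prefix length, 
a short VARCHAR does not):

ALTER TABLE email ADD INDEX subject_idx (subject(100));

Such an index helps equality comparisons and left-anchored LIKE patterns, but 
not arbitrary substring searches.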

Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/



- Original Message -
From: "Dallas Engelken" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, February 05, 2003 17:30
Subject: sql_big_selects


i have a database of about 1.5 million log entries that i want to search
based on subject (not indexed because it's seldom used).  when i try, i get
a fatal warning that i cant seem to kick.  i have rtfm, and changed
max_join_size=200 in my.cnf to hopefully make it run without the fatal
warning... that didnt work.

if i set sql_big_selects=1 and run the query, it takes 10 minutes to return
any data.

mysql> SELECT * FROM email WHERE 1 AND subject = 'RE: TEST EMAIL' ORDER BY
`id` DESC LIMIT 0,25;
ERROR 1104: The SELECT would examine too many records and probably take a
very long time. Check your WHERE and use SET OPTION SQL_BIG_SELECTS=1 if the
SELECT is ok

can someone offer some advice to help make this query work.  btw, the order
by `id` is the primary key.

thanks,
dallas

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail
<[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Regular Expressions

2003-02-08 Thread Dobromir Velev
Hi,
I use it in queries like

SELECT * FROM table WHERE field REGEXP "patern";

It is the same as using the LIKE operator wit more complex patterns
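
For example, these two are roughly equivalent:

SELECT * FROM table WHERE field LIKE 'abc%';
SELECT * FROM table WHERE field REGEXP '^abc';

but REGEXP also supports alternation, character classes and so on, which LIKE 
cannot express.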

Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com




- Original Message -
From: "Darren Young" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, February 06, 2003 00:02
Subject: Regular Expressions


> I've looked through the mysql manual for information on regualar
> expressions, and all that it seems to have are references such as:
>
> SELECT "fo\nfo" REGEXP "^fo$";
>
> How can the REGEXP be applied to a 'SELECT field FROM table' kind of
> statement?
>
> Thanks,
>
> Darren Young
>
> mysql,query
>
>
> -
> Before posting, please check:
>http://www.mysql.com/manual.php   (the manual)
>http://lists.mysql.com/   (the list archive)
>
> To request this thread, e-mail <[EMAIL PROTECTED]>
> To unsubscribe, e-mail
<[EMAIL PROTECTED]>
> Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
>
>


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: SELECT with DATE

2003-02-07 Thread Dobromir Velev
Hi,
You can use

SELECT * FROM table WHERE expire>NOW();

or

SELECT * FROM table WHERE to_days(expire)>to_days(NOW());

HTH
Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/

- Original Message -
From: "Stephen of Blank Canvas" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, February 05, 2003 18:18
Subject: SELECT with DATE


> Can anyone point me in the right direction for finding a method of only
> creating a query that selects records from a dB where today's date is
> less than the column 'expire', which is in DATE format.  The purpose of
> this is to automatically stop display special offers.
>
> Thanks
>
> Stephen
>
>
> -
> Before posting, please check:
>http://www.mysql.com/manual.php   (the manual)
>http://lists.mysql.com/   (the list archive)
>
> To request this thread, e-mail <[EMAIL PROTECTED]>
> To unsubscribe, e-mail
<[EMAIL PROTECTED]>
> Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
>
>


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Problem with a query - help!!?

2003-02-04 Thread Dobromir Velev
Hi,
To test for records that are not present in a table you should use

journoNatOpt.journoid is NULL

instead of

journoNatOpt.journoid = ''
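
The general pattern (t1, t2 and id are just placeholders here) is:

SELECT t1.id
FROM t1 LEFT JOIN t2 ON t2.id = t1.id
WHERE t2.id IS NULL;

The LEFT JOIN produces NULLs for the rows of t1 that have no match in t2, and 
the IS NULL test keeps exactly those rows.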

HTH
Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/

- Original Message -
From: "David Phipps" <[EMAIL PROTECTED]>
To: "MySQL List" <[EMAIL PROTECTED]>
Sent: Monday, February 03, 2003 15:46
Subject: Problem with a query - help!!?


> Hi,
>
> I am new to this list so I apologise if this is not the place to post this
> message.
>
> I am in the final stages of completing a project and I have one query that
> is causing problems.
>
> I need the query to get a story or stories from the story table and
> journalists from the journalist table where their respective id is not
> present in a table which links the journoid to a national option table and
> the storyid to the national option table.
>
> So far I have:
>
> SELECT storySub.storyid, storySub.headline, journoDetails.journoid,
> journoDetails.journoName, journoNatOpt.journoid AS natJid
> FROM releaseManager, journoDetails LEFT JOIN journoNatOpt ON
> journoDetails.journoid = journoNatOpt.journoid, storySub
> WHERE storySub.isNational = 1
> AND journoDetails.isNational = 1
> AND journoDetails.isApproved = 1
> AND journoNatOpt.journoid = ''
> AND releaseManager.storyid = storySub.storyid
> AND releaseManager.isReleased = 1
> AND releaseManager.immediateRelease = 0
> AND releaseManager.releaseTime BETWEEN #CreateODBCTime(Now())# AND
> #CreateODBCTime(DateAdd("n", 30, Now()))#
> AND releaseManager.releaseDate BETWEEN #CreateODBCDateTime(Now())# AND
> #CreateODBCDateTime(DateAdd("h", 24,Now()))#
>
> The above query seems to have trouble when I insert the following: AND
> journoNatOpt.journoid = ''
> If I remove this line then I see all of the national stories and any
> national journalists which is fine except that some of the stories and
> journalists are actually linked to a national option.  What I want to have
> is all stories and journalists that are national but where they are not
> linked to a national option.
>
> If this makes any sense and you can see what I am doing wrong then please
> help!!
>
> Thanks in advance
>
> Dave Phipps
> MySQL new guy!!
>
>
> -
> Before posting, please check:
>http://www.mysql.com/manual.php   (the manual)
>http://lists.mysql.com/   (the list archive)
>
> To request this thread, e-mail <[EMAIL PROTECTED]>
> To unsubscribe, e-mail
<[EMAIL PROTECTED]>
> Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
>
>


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: sql CSV import

2003-01-31 Thread Dobromir Velev
You can use the LOAD DATA INFILE statement

Check this
http://www.mysql.com/doc/en/LOAD_DATA.html

for more information

HTH
Dobromir Velev
[EMAIL PROTECTED]
www.websitepulse.com
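A minimal sketch of the statement for a comma-separated file - the file name, table
name and terminators are assumptions and must match how the file was exported:

  LOAD DATA INFILE '/tmp/export.csv'
  INTO TABLE my_table
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  LINES TERMINATED BY '\n';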


- Original Message -
From: "Andrew" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, January 30, 2003 12:48
Subject: RE: sql CSV import


>
> Hi all I am trying to import a csv file from MySQL and keep getting an
> error on line one.
> Is there an alternative way to import a csv file other than using
> phpmyadmin?
>
> I exported the file from MySQL but can't import it back in, any ideas?
>
> Andrew
>
>
>
>
>
> -
> Before posting, please check:
>http://www.mysql.com/manual.php   (the manual)
>http://lists.mysql.com/   (the list archive)
>
> To request this thread, e-mail <[EMAIL PROTECTED]>
> To unsubscribe, e-mail
<[EMAIL PROTECTED]>
> Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
>
>


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Help with SELECT, JOIN and WHERE query

2003-01-28 Thread Dobromir Velev
Hi,
a query like this should do the job

select user_profile.user_id from user_profile left join team_member on
(team_member.user_id=user_profile.user_id and team_member.team_id=2) where
team_member.user_id is NULL;


Dobromir Velev
[EMAIL PROTECTED]
www.websitepulse.com
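Formatted for readability, and checked against the sample data in the question below
(team 2 contains 1000, 1001, 1003, 1005 and 1006), the query should return 1002 and
1004 - the expected result:

  SELECT user_profile.user_id
  FROM user_profile
       LEFT JOIN team_member
       ON team_member.user_id = user_profile.user_id AND team_member.team_id = 2
  WHERE team_member.user_id IS NULL;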

- Original Message -
From: "heiko mundle" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, January 28, 2003 16:08
Subject: Help with SELECT, JOIN and WHERE query


> Hi,
>
> I got a difficult problem with a SELECT query:
>
> In my application there are two tables, one for all users (user_profile),
> one for a user-in-team relation.
>
> I want to know which users are still not in a specific team. Users can be
> in more than one team, but we don't care about that.
>
> Example:
> user_profile:
> +-+
> | user_id |
> +-+
> |1000 |
> |1001 |
> |1002 |
> |1003 |
> |1004 |
> |1005 |
> |1006 |
> |...  |
> +-+
>
> team_member:
> +-+-+
> | team_id | user_id |
> +-+-+
> |   1 |1000 |
> |   1 |1004 |
> |   1 |1005 |
> |   2 |1006 |
> |   2 |1003 |
> |   2 |1000 |
> |   2 |1001 |
> |   2 |1005 |
> |   3 |1001 |
> |   3 |1005 |
> |   3 |1002 |
> |   3 |1003 |
> | ... |...  |
> +-+-+
>
> Who is not in team no. 2?
>
> Result:
> +-+
> |1002 |
> |1004 |
> +-+
>
> Thanks for any hint
>
> Kind regards Heiko Mundle
>
> --
> +++ GMX - Mail, Messaging & more  http://www.gmx.net +++
> NEW: Get on the Internet with GMX. Surf around the clock for 1 ct/min!
>
>
> -
> Before posting, please check:
>http://www.mysql.com/manual.php   (the manual)
>http://lists.mysql.com/   (the list archive)
>
> To request this thread, e-mail <[EMAIL PROTECTED]>
> To unsubscribe, e-mail
<[EMAIL PROTECTED]>
> Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
>
>


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Substraction

2003-01-20 Thread Dobromir Velev
Hi,
There are several ways to solve the problem -
1. Check if you really need an unsigned column - if not, just alter the
column to signed.
2. A simple workaround for subtracting unsigned values is to add a
floating-point number to the operation - then all values will be converted
to float and the negative value will be displayed. For example this should
give you what you need

select 5.0 - id from test;

3. Use CAST function. Example:

select cast(5-id as signed) from test;

HTH
Dobromir Velev
[EMAIL PROTECTED]



- Original Message -
From: "Octavian Rasnita" <[EMAIL PROTECTED]>
To: "Stefan Hinz, iConnect (Berlin)" <[EMAIL PROTECTED]>; "Brian
Lindner" <[EMAIL PROTECTED]>
Cc: "MySQL" <[EMAIL PROTECTED]>
Sent: Monday, January 20, 2003 08:22
Subject: Re: Substraction


> But I don't want to perform a 15 - id, meaning 15 - 10.
> I want to calculate 5 - id, meaning 5 - 10.
>
> It should give me -5 or 5 but not | 18446744073709551611 |
>
>
> Teddy,
> Teddy's Center: http://teddy.fcc.ro/
> Email: [EMAIL PROTECTED]
>
> - Original Message -
> From: "Stefan Hinz, iConnect (Berlin)" <[EMAIL PROTECTED]>
> To: "Brian Lindner" <[EMAIL PROTECTED]>; "Octavian Rasnita"
> <[EMAIL PROTECTED]>
> Cc: "MySQL" <[EMAIL PROTECTED]>
> Sent: Monday, January 20, 2003 1:09 AM
> Subject: Re: Substraction
>
>
> Brian,
>
> > > create table test(id int unsigned);
> > > insert into test values(10);
> > > select 15 - id from test;
> > > The result is 18446744073709551611 instead of -5.
>
> Trying to confirm this with MySQL 4.0.7 on a Win2K box:
>
> mysql> create table test(id int unsigned);
> Query OK, 0 rows affected (0.56 sec)
>
> mysql> insert into test values(10);
> Query OK, 1 row affected (0.08 sec)
>
> mysql> select 15 - id from test;
> +-+
> | 15 - id |
> +-+
> |   5 |
> +-+
> 1 row in set (0.06 sec)
>
> Now, once again with phpMyAdmin 2.3.3pl1:
>
> SELECT 15 - id
> FROM test LIMIT 0, 30
> 15 - id
> 5
>
> But with the following statement (same with phpMyAdmin):
>
> mysql> select 5 - id from test;
> +--+
> | 5 - id   |
> +--+
> | 18446744073709551611 |
> +--+
> 1 row in set (0.00 sec)
>
> Check the "1" in your "15". Maybe there's something wrong.
>
> Regards,
> --
>   Stefan Hinz <[EMAIL PROTECTED]>
>   Geschäftsführer / CEO iConnect GmbH <http://iConnect.de>
>   Heesestr. 6, 12169 Berlin (Germany)
>   Tel: +49 30 7970948-0  Fax: +49 30 7970948-3
>
> - Original Message -
> From: "Brian Lindner" <[EMAIL PROTECTED]>
> To: "Octavian Rasnita" <[EMAIL PROTECTED]>
> Cc: "MySQL" <[EMAIL PROTECTED]>
> Sent: Sunday, January 19, 2003 10:28 PM
> Subject: Re: Substraction
>
>
> > Octavian,
> >
> > Sunday, January 19, 2003, 7:31:57 AM, you wrote:
> > > Hi all,
> >
> > > I've tried the following query:
> >
> > > create table test(id int unsigned);
> > > insert into test values(10);
> > > select 15 - id from test;
> >
> > > The result is 18446744073709551611 instead of -5.
> >
> > I ran this.. and it worked for me
> >
> > MySql 4.0.7 on Linux... ran it through phpMyAdmin
> >
> > > Am I doing something wrong?
> >
> > > Thank you.
> >
> > > Teddy,
> > > Teddy's Center: http://teddy.fcc.ro/
> > > Email: [EMAIL PROTECTED]
> >
> >
> >
> >
> > -
> > > Before posting, please check:
> > >http://www.mysql.com/manual.php   (the manual)
> > >http://lists.mysql.com/   (the list archive)
> >
> > > To request this thread, e-mail <[EMAIL PROTECTED]>
> > > To unsubscribe, e-mail
> <[EMAIL PROTECTED]>
> > > Trouble unsubscribing? Try:
> http://lists.mysql.com/php/unsubscribe.php
> >
> >
> >
> >
> >
> > --
> >  Brian
> >  Email: <[EMAIL PROTECTED]>
> >
> >
> >
> >
> > -
> > Before posting, please check:
> >http://www.mysql.com/manual.php   (the manual)
> >http://lists.mysql.com/   (the list archive)
> >
> > To request this thread, e-mail <[EMAIL PROTECTED]>
> > To unsubscribe, e-mail
> <[EMAIL PROTECTED]>
> > Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
> >
>
>
>
>
> -
> Before posting, please check:
>http://www.mysql.com/manual.php   (the manual)
>http://lists.mysql.com/   (the list archive)
>
> To request this thread, e-mail <[EMAIL PROTECTED]>
> To unsubscribe, e-mail
<[EMAIL PROTECTED]>
> Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
>
>


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Is it normal?

2003-01-20 Thread Dobromir Velev
Hi,
Please read
http://www.mysql.com/doc/en/Problems_with_float.html

It is a common problem when working with floating-point numbers

Dobromir Velev
[EMAIL PROTECTED]
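The manual page aside, the usual workarounds (not part of the original reply, shown
here only as a sketch; the test2 table name is made up) are to store such values in an
exact DECIMAL column, or to keep FLOAT and compare within a small tolerance:

  CREATE TABLE test2 (id DECIMAL(10,4));
  INSERT INTO test2 VALUES (1.123), (3.1495);
  SELECT * FROM test2 WHERE id = 1.123;                 -- exact match works with DECIMAL

  SELECT * FROM test WHERE ABS(id - 1.123) < 0.0001;    -- tolerance test on the FLOAT column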


- Original Message -
From: "Octavian Rasnita" <[EMAIL PROTECTED]>
To: "MySQL" <[EMAIL PROTECTED]>
Sent: Monday, January 20, 2003 10:34
Subject: Is it normal?


> Hi all,
>
> I've tried the following SQL queries:
>
> mysql> create table test(id float);
> Query OK, 0 rows affected (0.00 sec)
>
> mysql> insert into test values(1.123), (3.1495);
> Query OK, 2 rows affected (0.01 sec)
> Records: 2  Duplicates: 0  Warnings: 0
>
> mysql> select * from test where id=1.123;
> Empty set (0.00 sec)
>
> Shouldn't this last query show me the record that has the id=1.123?
>
> What query should I use to do this?
>
> Thank you.
>
>
>
> Teddy,
> Teddy's Center: http://teddy.fcc.ro/
> Email: [EMAIL PROTECTED]
>
>
>
> -
> Before posting, please check:
>http://www.mysql.com/manual.php   (the manual)
>http://lists.mysql.com/   (the list archive)
>
> To request this thread, e-mail <[EMAIL PROTECTED]>
> To unsubscribe, e-mail
<[EMAIL PROTECTED]>
> Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
>
>


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: RedHat 8, mysql-server-3.23.54a-4 and relocating the data directory

2003-01-20 Thread Dobromir Velev
Hi,
It looks like you haven't updated the mysql client configuration. Adding
these lines to your my.cnf file should solve the problem.

[client]
port=3306
socket=/raidarrary2/data/mysql/mysql.sock


HTH
Dobromir Velev
[EMAIL PROTECTED]


- Original Message -
From: "Michael Pelley" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Saturday, January 18, 2003 16:26
Subject: RedHat 8, mysql-server-3.23.54a-4 and relocating the data directory


Hi All!  I'm new to the list, but I've been using MySQL under Red Hat for a
couple of years now.

I'm in the process of setting up a new Red Hat 8 box and I want to change
the data directory to a different location (/raidarray2/data/mysql).  I've
edited the /etc/my.cnf as:
[mysqld]
datadir=/raidarray2/data/mysql
socket=/raidarrary2/data/mysql/mysql.sock
[mysql.server]
user=mysql
basedir=/raidarry2/data

I've also checked RedHat's /etc/rc.d/init.d/mysqld file and changed the line
datadir="/raidarry2/data/mysql"
and restarted the server.

Still, I keep getting the error
Can't connect to local MySQL server through socket
'/var/lib/mysql/mysql.sock' (2)

On the command line, I get the same error with
bash_$ mysql -u root -p mysql
unless I modify it as
bash_$ mysql -u root -p -S /raidarry2/data/mysql/mysql.sock mysql

How can I fix this?  Is there some other my.cnf file that is being used in
RedHat 8?  Note that I'm running the version that came with RedHat 8 and has
been patched with the up2date command.

Thanks!

Cheers,
Mike

--
Mike Pelley "Non illegitimati carborundum"
Owner & "Misc. Rambler" of Pelleys.com
[EMAIL PROTECTED] - www.pelleys.com


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail
<[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Substraction

2003-01-20 Thread Dobromir Velev
Hi,
I had the same problem last week - I checked the manual and it seems
this comes from the new way unsigned columns are treated in MySQL 4.

Here is a quote from http://www.mysql.com/doc/en/Column_types.html

"Warning: you should be aware that when you use subtraction between integer
values where one is of type UNSIGNED, the result will be unsigned!"

A workaround for this problem, which is described in the manual, is either to
cast the column to a signed integer or to include a double value (0.0 or
something like this) in the operation.

Check http://www.mysql.com/doc/en/Cast_Functions.html for more information

HTH
Dobromir Velev
[EMAIL PROTECTED]
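Applied to the example discussed in this thread, the two workarounds look like this
sketch:

  SELECT CAST(5 - id AS SIGNED) FROM test;   -- the cast forces a signed result
  SELECT 5.0 - id FROM test;                 -- a decimal constant forces a non-integer operation

Both should return -5 (the second as -5.0) for the row with id = 10.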


- Original Message -
From: "Stefan Hinz, iConnect (Berlin)" <[EMAIL PROTECTED]>
To: "Brian Lindner" <[EMAIL PROTECTED]>; "Octavian Rasnita"
<[EMAIL PROTECTED]>
Cc: "MySQL" <[EMAIL PROTECTED]>
Sent: Monday, January 20, 2003 01:09
Subject: Re: Substraction


> Brian,
>
> > > create table test(id int unsigned);
> > > insert into test values(10);
> > > select 15 - id from test;
> > > The result is 18446744073709551611 instead of -5.
>
> Trying to confirm this with MySQL 4.0.7 on a Win2K box:
>
> mysql> create table test(id int unsigned);
> Query OK, 0 rows affected (0.56 sec)
>
> mysql> insert into test values(10);
> Query OK, 1 row affected (0.08 sec)
>
> mysql> select 15 - id from test;
> +-+
> | 15 - id |
> +-+
> |   5 |
> +-+
> 1 row in set (0.06 sec)
>
> Now, once again with phpMyAdmin 2.3.3pl1:
>
> SELECT 15 - id
> FROM test LIMIT 0, 30
> 15 - id
> 5
>
> But with the following statement (same with phpMyAdmin):
>
> mysql> select 5 - id from test;
> +--+
> | 5 - id   |
> +--+
> | 18446744073709551611 |
> +--+
> 1 row in set (0.00 sec)
>
> Check the "1" in your "15". Maybe there's something wrong.
>
> Regards,
> --
>   Stefan Hinz <[EMAIL PROTECTED]>
>   Geschäftsführer / CEO iConnect GmbH <http://iConnect.de>
>   Heesestr. 6, 12169 Berlin (Germany)
>   Tel: +49 30 7970948-0  Fax: +49 30 7970948-3
>
> - Original Message -
> From: "Brian Lindner" <[EMAIL PROTECTED]>
> To: "Octavian Rasnita" <[EMAIL PROTECTED]>
> Cc: "MySQL" <[EMAIL PROTECTED]>
> Sent: Sunday, January 19, 2003 10:28 PM
> Subject: Re: Substraction
>
>
> > Octavian,
> >
> > Sunday, January 19, 2003, 7:31:57 AM, you wrote:
> > > Hi all,
> >
> > > I've tried the following query:
> >
> > > create table test(id int unsigned);
> > > insert into test values(10);
> > > select 15 - id from test;
> >
> > > The result is 18446744073709551611 instead of -5.
> >
> > I ran this.. and it worked for me
> >
> > MySql 4.0.7 on Linux... ran it through phpMyAdmin
> >
> > > Am I doing something wrong?
> >
> > > Thank you.
> >
> > > Teddy,
> > > Teddy's Center: http://teddy.fcc.ro/
> > > Email: [EMAIL PROTECTED]
> >
> >
> >
> >
> > -
> > > Before posting, please check:
> > >http://www.mysql.com/manual.php   (the manual)
> > >http://lists.mysql.com/   (the list archive)
> >
> > > To request this thread, e-mail <[EMAIL PROTECTED]>
> > > To unsubscribe, e-mail
> <[EMAIL PROTECTED]>
> > > Trouble unsubscribing? Try:
> http://lists.mysql.com/php/unsubscribe.php
> >
> >
> >
> >
> >
> > --
> >  Brian
> >  Email: <[EMAIL PROTECTED]>
> >
> >
> >
> >
> > -
> > Before posting, please check:
> >http://www.mysql.com/manual.php   (the manual)
> >http://lists.mysql.com/   (the list archive)
> >
> > To request this thread, e-mail <[EMAIL PROTECTED]>
> > To unsubscribe, e-mail
> <[EMAIL PROTECTED]>
> > Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
> >
>
>
> -
> Before posting, please check:
>http://www.mysql.com/manual.php   (the manual)
>http://lists.mysql.com/   (the list archive)
>
> To request this thread, e-mail <[EMAIL PROTECTED]>
> To unsubscribe, e-mail
<[EMAIL PROTECTED]>
> Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
>
>


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




May be bug in MySQL 4 (addition)

2003-01-14 Thread Dobromir Velev
In addition to the previous message, I just want to say that the problem is
most likely with the unsigned columns - I altered the integer column to
remove the 'unsigned' property and now it is working properly.

Regards
Dobromir Velev
[EMAIL PROTECTED]
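For the table structure quoted below, the alteration described above would be a
statement along these lines (the table name is a placeholder):

  ALTER TABLE table_name MODIFY int_col TINYINT(3);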


- Original Message -----
From: "Dobromir Velev" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, January 14, 2003 21:37
Subject: May be bug in MySQL 4


> Hi,
> There might be a bug in mysql 4 (tested on 4.0.8-gamma and 4.0.9-gamma)
when
> combining simple arithmetic operations and date conversion
>
> I have a table with the following structure
> date_col date 0000-00-00
> int_col tinyint(3) unsigned
>
> This is what happens when I try to run a query like the one below.
>
> mysql> select to_days(Now())-to_days(data_col)-int_col
> from table_name where to_days(Now())-to_days(data_col)
> +---+
> | to_days(Now())-to_days(date_col)-int_col |
> +---+
> |  18446744073709551607 |
> |  18446744073709551609 |
> |  18446744073709551607 |
> |  18446744073709551609 |
> |  18446744073709551607 |
> |      18446744073709551615 |
>
>
> Tested on Linux Redhat 7.3 and on a Slackware 7.0.0
>
> TIA
> Dobromir Velev
>


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




May be bug in MySQL 4

2003-01-14 Thread Dobromir Velev
Hi,
There might be a bug in mysql 4 (tested on 4.0.8-gamma and 4.0.9-gamma) when
combining simple arithmetic operations and date conversion

I have a table with the following structure
date_col date 0000-00-00
int_col tinyint(3) unsigned

This is what happens when I try to run a query like the one below.

mysql> select to_days(Now())-to_days(data_col)-int_col
from table_name where to_days(Now())-to_days(data_col)
+---+
| to_days(Now())-to_days(date_col)-int_col |
+---+
|  18446744073709551607 |
|  18446744073709551609 |
|  18446744073709551607 |
|  18446744073709551609 |
|  18446744073709551607 |
|  18446744073709551615 |


Tested on Linux Redhat 7.3 and on a Slackware 7.0.0

TIA
Dobromir Velev


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




RE: Tighly packed table

2002-01-25 Thread Dobromir Velev

Hi,
If your column is of type VARCHAR, you won't save much space (at least not as
much as DL Neil said). The specification of the varchar column type is that
it uses only as many bytes as the data stored in it. Of course this will make
your indexes smaller (if this column is indexed).
A few days ago I decided to optimize one of my tables (5 million rows) and
altered a varchar(250) field to a varchar(100).
The size of the MYD data file changed by less than 1 MB, so you can see that
there was not much point in doing it.

Dobromir Velev
Software Developer
http://www.websitepulse.com/
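If someone does want to check whether a column can safely be shrunk, the steps
discussed further down this thread amount to a sketch like this (TableName and the
length of 12 are taken from the quoted messages and are only illustrative):

  SELECT MAX(LENGTH(firstname)) FROM TableName;
  ALTER TABLE TableName MODIFY firstname VARCHAR(12);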


-Original Message-
From: DL Neil [mailto:[EMAIL PROTECTED]]
Sent: Saturday, January 26, 2002 5:39 PM
To: Michael Stearne
Cc: Michael Stearne; Roger Karnouk; [EMAIL PROTECTED]
Subject: Re: Tighly packed table


Michael,

Let's round it up to 3 million rows (I'm lazy at math too!)
Let's say you currently allow 15 bytes per name.
Let's say the longest name on file is 12 characters.

The exercise would save 3 bytes/row multiplied by 3M rows, ie 9MB
(yes, let's ignore binary-decimal differences too)

If you had two name fields (first- and family-name).
Woohoo that's a potential saving of 18MB
I'm also generous (to a fault) so round it up to 20MB.

If you go out to buy a small PC HDD today, the smallest catalog product
might be 40GB
(let's assume they quote formatted capacity - they don't, but there am I
being half-full/-baked again)

> Thus we have the ability to save about 0.05% against the total capacity of a new
drive.
Of course, the designer might have allowed way too much name-space (pun
hah!) or the table may have other
'compressible' columns.
> Let's go for a saving of 0.1%

A new drive costs how much?
Your hourly rate is how much?
How long will the job take you?
How many cups of coffee is that?
Can the client carry the cost of all that coffee?
Won't your stomach rebel at the mistreatment?

Mind you, most of the above is made up - I don't have any faults!
Time for me to go refill my glass (with healthy fruit juice)!
=dn

PS after enjoying myself, let me point out that if the 'name' fields are
currently defined as variable length,
this exercise would allow you to make them fixed length. If you can 'wipe
out' all the variable width columns in
the table, performance will improve significantly!


> Hahaha.  This is a static database.  But you are right I don't know
> how much this will actually help.  Hard disk isn't an issue.  It was
> just an experiment...(that I have no time for anyway!)
>
> Thanks,
> Michael
>
>
> On Friday, January 25, 2002, at 06:19 PM, DL Neil wrote:
>
> > ...and because no one has been really cynical...
> >
> > After that query runs, then prepare for a coffee overload whilst you
> > perform the ALTER TABLE, then get ready
> > because if you shorten the field to (say) 12 characters/bytes the
> > very next day, someone with a 13 character
> > name is going to try to register!
> >
> > I'm wondering just how much space this 'little' exercise is going to
> > save, either as a ratio of the size of the
> > db, or as a ratio of HDD size?
> >
> > My glass is half-empty!
> > =dn
> >
> >
> > - Original Message -
> > From: "Michael Stearne" <[EMAIL PROTECTED]>
> > To: "Roger Karnouk" <[EMAIL PROTECTED]>
> > Cc: <[EMAIL PROTECTED]>
> > Sent: 24 January 2002 22:58
> > Subject: Re: Tighly packed table
> >
> >
> >> The problem is, this query really hurts (I don't know if it finishes)
> >> for unindexed field for 2.9 million rows.  But I'm sure it will finish
> >> eventually.
> >>
> >> Michael
> >>
> >> Roger Karnouk wrote:
> >>
> >>> select max(length(firstname)) from TableName;
> >>>
> >>> -Original Message-
> >>> From: Michael Stearne [mailto:[EMAIL PROTECTED]]
> >>> Sent: Thursday, January 24, 2002 4:38 PM
> >>> To: Christopher Thompson
> >>> Cc: [EMAIL PROTECTED]
> >>> Subject: Re: Tighly packed table
> >>>
> >>>
> >>> Christopher Thompson wrote:
> >>>
> >>>> At 04:10 PM 1/24/2002 -0500, Michael Stearne wrote:
> >>>>
> >>>>> We have a somewhat large read-only table (2.9 million recs). I am
> >>>>> wonder if there is a utility that will look at each row of each
> >>>>> columns and come up with a summary of the largest field (in
> >>>>> character
> >>>>> length) for each column.  For example, scan each row's firstname
> >>>>> fi

Table crashing

2002-01-08 Thread Dobromir Velev

Hi,
I've got a table with 5 million records and with 10 records added
every day.
My problem is that when I try to delete old records from this table, at
some point I receive the following error:
ERROR 1034: Incorrect key file for table: 'logs'. Try to repair it.

and the query execution stops. I know my table is OK - I used myisamchk
before trying to delete records and there were no problems

Any ideas?

Dobromir Velev


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
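No reply is archived for this one. As a hedged sketch only, the repair the error
message asks for can be run either from within the server:

  REPAIR TABLE logs;

or, with mysqld stopped, from the shell (the data-directory path is a placeholder):

  myisamchk --recover /path/to/datadir/dbname/logs.MYI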




Re: Newbie trying to optimizing a join search

2001-12-29 Thread Dobromir Velev

Hi,
using LIMIT can speed things up significantly. If you know the approximate
number of rows your query should return, just add LIMIT
number_of_rows at the end of your query - it will look like

SELECT products.return
FROM products,prodcat
WHERE products.prodno=prodcat.prodno
 AND products.Store="0001"
 AND products.Class="0002"
 AND prodcat.category="Animal"
 AND prodcat.subcategory="Bear"
ORDER BY products.Title
LIMIT 100


Check http://www.mysql.com/doc/L/I/LIMIT_optimisation.html
for further information on how MySQL deals with LIMIT


Hope this helps

Dobromir Velev
Web Developer
http://www.websitepulse.com/
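Not part of the original reply, but a hedged sketch of composite indexes that would
typically serve this particular WHERE clause and join (the column names come from the
question below; the index names are made up):

  ALTER TABLE products ADD INDEX idx_store_class (Store, Class, prodno);
  ALTER TABLE prodcat  ADD INDEX idx_cat_subcat  (category, subcategory, prodno);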


-Original Message-
From: Charley L. Tiggs <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED] <[EMAIL PROTECTED]>
Date: Saturday, December 29, 2001 01:04
Subject: Newbie trying to optimizing a join search


>I'm new to mysql and have gotten most things to work satisfactorily thus
>far.  One of the things that is stumping me at the moment is how to
>perform queries using joins and how to optimize them for speed.
>
>the query is structured as follows:
>
>SELECT products.return
>FROM products,prodcat
>WHERE products.prodno=prodcat.prodno
> AND products.Store="0001"
> AND products.Class="0002"
> AND prodcat.category="Animal"
> AND prodcat.subcategory="Bear"
>ORDER BY products.Title
>
>The above query takes approximately 3 seconds to complete (which is
>better than the 27 seconds without indexes, I admit, but still a little
>slow).  Is there a way to optimize this kind of search?  Setup of the
>tables used in this particular area are below:
>
>Setup:
> products table with the following fields:
> -ID (primary key)
> -prodno
> -store
> -class
> -title
> -designer
> -sugretail
> -discount price
> -return
>
> prodcat (contains categories associated with each product) with the
>following fields:
> -ID (primary key)
> -prodno
> -category
> -subcategory
>
>All fields except for ID fields and sugretail and discount price fields
>are varchar fields.  Category and subcategory fields are longest at 50
>chars each.
>
>In products, the ID field has its own index, product number, store,
>class, designer, and title are inside an index called index_product.
>
>In prodcat, product number, category, and subcategory are part of an
>index called index_category.
>
>Thanks in advance for your attention and time.
>
>Charley
>
>
>-
>Before posting, please check:
>   http://www.mysql.com/manual.php   (the manual)
>   http://lists.mysql.com/   (the list archive)
>
>To request this thread, e-mail <[EMAIL PROTECTED]>
>To unsubscribe, e-mail <[EMAIL PROTECTED]>
>Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
>
>


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: query error

2001-12-29 Thread Dobromir Velev

HI,
The 127 error message is "Record-file is crashed" - use
myisamchk -r table_name
to fix it.
For further information on this topic check
http://www.mysql.com/doc/R/e/Repair.html

Hope this helps

Dobromir Velev
Web Developer
http://www.websitepulse.com/
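If the server is running, the same repair can also be issued from the mysql client
(table_name is a placeholder for the crashed table):

  REPAIR TABLE table_name;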

-Original Message-
From: WANG_FAITH <[EMAIL PROTECTED]>
To: '[EMAIL PROTECTED]' <[EMAIL PROTECTED]>
Date: Saturday, December 29, 2001 03:25
Subject: query error


>Hi, all
>When I query data from a table ,the following error occurs:
>ERROR 1030: Got error 127 from table handler.
>Would you please tell me the reason to cause the error and how to deal with
>it?
>Thank you!
>
>-
>Before posting, please check:
>   http://www.mysql.com/manual.php   (the manual)
>   http://lists.mysql.com/   (the list archive)
>
>To request this thread, e-mail <[EMAIL PROTECTED]>
>To unsubscribe, e-mail <[EMAIL PROTECTED]>
>Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
>
>



-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Syntax error with a JOIN

2001-12-29 Thread Dobromir Velev

Hi,
Try to change the alias - group is a reserved word and it is causing the syntax error

This one should work

SELECT members.name, gr.id, gr.name
FROM members
 LEFT JOIN members AS gr
 ON gr.id = members.group_id
WHERE members.id = 6

Dobromir Velev
Web developer
http://www.websitepulse.com/
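If one prefers to keep the alias, a sketch of the alternative: MySQL also accepts a
reserved word as an identifier when it is quoted with backticks:

  SELECT members.name, `group`.id, `group`.name
  FROM members
       LEFT JOIN members AS `group`
       ON `group`.id = members.group_id
  WHERE members.id = 6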

-Original Message-
From: Keegan Miller <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED] <[EMAIL PROTECTED]>
Date: Saturday, December 29, 2001 08:03
Subject: Syntax error with a JOIN


>I'm pulling my hair out about a query that should be simple.  I have a
>table "members" with columns id, group_id, and a few more that aren't
>important here.  group_id refers to another row in the same "members"
>table, but not all members have a group_id.  What I want to do is
>select, for a particular member, the member's name and (if the member is
>in a group) the group's ID and name.
>
>Here's the simplified query:
>
>SELECT members.name, group.id, group.name
>FROM members
> LEFT JOIN members AS group
> ON group.id = members.group_id
>WHERE members.id = 6
>
>
>And here's the error I get:
>
>You have an error in your SQL syntax near 'group ON group.id =
>members.group_id WHERE members.id = 6' at line 3
>
>
>Can anybody explain to me how I can get this to work?  Is there some
>kind of trick to joining a table to itself?
>
>Thanks,
>Keegan Miller
>
>
>
>
>
>-
>Before posting, please check:
>   http://www.mysql.com/manual.php   (the manual)
>   http://lists.mysql.com/   (the list archive)
>
>To request this thread, e-mail <[EMAIL PROTECTED]>
>To unsubscribe, e-mail <[EMAIL PROTECTED]>
>Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
>
>


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php