Troubles with creating indexes on float columns on MyISAM tables in MySQL 5.6.15 and MySQL 5.6.14 running on FreeBSD 8.4

2014-01-29 Thread Mikhail Berman
I have run into an interesting problem with creating indexes on float columns
in MyISAM tables in MySQL 5.6.15 and MySQL 5.6.14 running on FreeBSD 8.4: I am
not able to create indexes on these columns.

Indexes on all other columns work just fine.

The problem occurred while I was loading data from a MySQL dump into a database.

Loads would fail on the ENABLE KEYS line in the dump with: ERROR 2013
(HY000): Lost connection to MySQL server during query

The problem was recreated in many different scenarios and can be reproduced
with a simple test:

I have a table:

mysql> show create table LEGAL_REGISTRATION_TWO\G;
*** 1. row ***
   Table: LEGAL_REGISTRATION_TWO
Create Table: CREATE TABLE `LEGAL_REGISTRATION_TWO` (
  `legal_registration_key` int(10) unsigned NOT NULL DEFAULT '0',
  `company_fkey` varchar(10) NOT NULL DEFAULT '',
  `law_firm_fkey` varchar(10) NOT NULL DEFAULT '',
  `registrant_is_guarantor` int(1) NOT NULL DEFAULT '0',
  `plan_name` text NOT NULL,
  `copy_sent_to_firm` int(1) NOT NULL DEFAULT '0',
  `copy_sent_to_firm_name_address_text` text NOT NULL,
  `law_firm_opinion` int(1) NOT NULL DEFAULT '0',
  `law_firm_opinion_type` varchar(10) NOT NULL DEFAULT '',
  `law_firm_opinion_text` text NOT NULL,
  `law_firm_opinion_text_url` varchar(200) NOT NULL DEFAULT '',
  `law_firm_relationship` varchar(20) NOT NULL DEFAULT '',
  `legal_fees` float NOT NULL DEFAULT '0',
  `accounting_fees` float(10,2) NOT NULL DEFAULT '0.00',   <-- I am attempting to create an index on this field
  `ftp_file_name_fkey` varchar(80) NOT NULL DEFAULT '',
  `form_fkey` varchar(20) NOT NULL DEFAULT '',
  `file_date` varchar(10) NOT NULL DEFAULT '',
  `file_accepted` varchar(20) NOT NULL DEFAULT '',
  `file_size` varchar(10) NOT NULL DEFAULT '',
  `http_file_name_html` varchar(100) NOT NULL DEFAULT '',
  `http_file_name_text` varchar(100) NOT NULL DEFAULT '',
  `qc_check_1` int(1) NOT NULL DEFAULT '0',
  `qc_check_2` int(1) NOT NULL DEFAULT '0',
  `create_date` varchar(10) NOT NULL DEFAULT '',
  `change_date` varchar(10) NOT NULL DEFAULT ''
) ENGINE=MyISAM DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

With a single row in it:

mysql> select count(*) from LEGAL_REGISTRATION_TWO;
+----------+
| count(*) |
+----------+
|        1 |
+----------+
1 row in set (0.00 sec)

When I attempt to alter the table to create an index on a float column I
get the error:

mysql> alter table LEGAL_REGISTRATION_TWO add key test1dx
(`accounting_fees`);
ERROR 2013 (HY000): Lost connection to MySQL server during query
mysql>
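
For reference, the whole test can be condensed into one short script (a
minimal sketch using an illustrative scratch table, not the full schema
above):

-- Hypothetical scratch table; only the float column matters for the test
CREATE TABLE float_idx_test (
  id              INT UNSIGNED NOT NULL DEFAULT 0,
  accounting_fees FLOAT(10,2)  NOT NULL DEFAULT 0.00
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

INSERT INTO float_idx_test VALUES (1, 100.00);

-- On the affected 5.6.14/5.6.15 + FreeBSD 8.4 servers this is where the
-- connection is lost; elsewhere it should simply add the index
ALTER TABLE float_idx_test ADD KEY test1dx (accounting_fees);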

I have made a number of changes in /etc/my.cnf trying to resolve this
problem, and currently the following entries are in my.cnf:

net_read_timeout=2400
net_write_timeout=2400
big-tables=on
connect_timeout=40
myisam_sort_buffer_size=1073741824

max_allowed_packet = 128M

I am not finding any talk on the Internet about this being a problem for
anyone else.


Any ideas on how to solve this problem are greatly appreciated.


-- 
Mikhail Berman


Re: Help! The dreaded Incorrect key file for table 'XXXXXXX'; try to repair it error

2012-03-12 Thread Mikhail Berman
Hi Victor,

To answer your question about saving the table.

The section *Stage 3: Difficult repair* at
http://dev.mysql.com/doc/refman/5.6/en/myisam-repair.html directly addresses
your concerns.

You may also want to look into the different options of the REPAIR TABLE
command, http://dev.mysql.com/doc/refman/5.6/en/repair-table.html, to see
whether the EXTENDED or USE_FRM option can be used to restore your table.
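
For instance, in increasing order of aggressiveness (standard REPAIR TABLE
options, run against your own table name):

REPAIR TABLE notes;            -- default repair
REPAIR TABLE notes EXTENDED;   -- slower, rebuilds the index row by row
REPAIR TABLE notes USE_FRM;    -- last resort: recreates the index file from the .frm definition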

Regards,

Mikhail Berman

On Sat, Mar 10, 2012 at 12:38 PM, Victor Danilchenko
vic...@askonline.net wrote:

Hi all,

I was upgrading some web software on my server, and its upgrade
 involved upgrading its database. After the upgrade, the following error
 started appearing:

 mysqldump: Got error: 1034: Incorrect key file for table 'notes'; try to
 repair it when using LOCK TABLES

So i tried doing lock and repair:

 mysql> LOCK TABLES notes WRITE;
 ERROR 1034 (HY000): Incorrect key file for table 'notes'; try to repair it

 mysql> REPAIR TABLE notes\G
 *** 1. row ***
   Table: sugar.notes
  Op: repair
 Msg_type: Error
 Msg_text: Incorrect key file for table 'notes'; try to repair it
 *** 2. row ***
   Table: sugar.notes
  Op: repair
 Msg_type: error
 Msg_text: Corrupt
 2 rows in set (0.00 sec)

So i tried stopping mysql server and running myisamchk:

 # myisamchk --verbose --force --update-state --key_buffer_size=64M
 --sort_buffer_size=64M --read_buffer_size=1M --write_buffer_size=1M
 ~mysql/sugar/notes.MYI

 # myisamchk --verbose notes
 Checking MyISAM file: notes
 Data records:9519   Deleted blocks:   0
 - check file-size
 - check record delete-chain
 No recordlinks
 - check key delete-chain
 block_size 1024:
 block_size 2048:
 block_size 3072:
 block_size 4096:
 - check index reference
 - check data record references index: 1
 - check data record references index: 2
 - check data record references index: 3
 - check data record references index: 4
 - check record links

but the error is still there, even though myisamchk apparently sees
 nothing wrong!

is there any way to save this table? it was a result of an upgrade
 that took about 24 hours to run, so I really don't want to re-do it from
 scratch.

many thanks in advance for any sage advice.

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




-- 
Mikhail Berman


Re: Import .dbf files

2011-07-19 Thread Mikhail Berman
Hi Andrew,

You might want to look for conversion tools like
http://dbfview.com/how-to-open-dbf.html so you can dump dBASE files into
some other type of file readable by MySQL.
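
Once the .dbf data has been exported to a delimited text file (CSV, for
example), loading it is straightforward; a rough sketch, where the file path,
table name and column layout are placeholders rather than anything specific
to your data:

LOAD DATA LOCAL INFILE '/path/to/exported.csv'
INTO TABLE my_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;   -- skip the header row, if the export wrote one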

On Tue, Jul 19, 2011 at 3:52 PM, andrewmchor...@cox.net wrote:

 Hello

 I am about to create a database in mysql. I would like to be able to import
 some dbase3 (.dbf) files into the tables I will be defining. What is the
 easiest way to import the table. Is there software that can be downloaded
 that will allow me to do this?

 Andrew

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=mikhail...@gmail.com



Best,

-- 
Mikhail Berman


Re: Information_schema permission error

2010-02-17 Thread Mikhail Berman

On 02/16/2010 17:30, Zakai Kinan wrote:

I am getting this error - mysqldump: Got error: 1044: Access denied for user 
'root'@'localhost' to database 'information_schema' when using LOCK TABLES.  I 
am using 5.1.45.  I don't understand what is causing this problem.  Does anyone 
have a clue?

TIA,


Zak




   

Hi Zak,

The information_schema database is not a set of real tables; it is a set of
views - http://dev.mysql.com/doc/refman/5.1/en/information-schema.html -
and because of that you cannot back up information_schema tables, as there
are none.

The error message you are getting is simply confusing and misrepresents the
situation.
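
You can still read anything you need out of it with ordinary queries; for
example (the schema name here is just a placeholder):

SELECT table_name, engine, table_rows
FROM information_schema.tables
WHERE table_schema = 'your_database';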


Hoping this helps

Mikhail Berman

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: table export in cron

2010-01-06 Thread Mikhail Berman

Hi Machiel,

As an alternative, you might consider using the mysqldump command,
http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html, in a KSH/BASH
script running from cron.


The script might look like this:

=

#!/bin/ksh

mysqldump --password=yourpassword [more switches needed here] \
your_database your_table > /path/to/output/file




The mysqldump command has switches to accomplish the field termination you
need, plus it gives you the opportunity to tailor the output for the target
database via the --compatible=name switch:
http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html#option_mysqldump_compatible.


Hoping this helps,

Mikhail Berman

machiel.richards wrote:

Hi all

I have a question regarding exporting of tables to a file from mysql.

We need to export tables from mysql to a delimited file which will then be
imported into another database (oracle).

We can do this manually from within mysql using the following command:

select * from table into outfile '/path/to/output/file' fields terminated by '|';

This needs to be configured though to be run in a cron once every week at a
specific time.

How can we do this when running in a cron script?

Your assistance is appreciated.

Regards

Machiel


Re: help design the table

2009-12-23 Thread Mikhail Berman
Hi Eva,

It seems to me that you might want two tables to describe the data in your
database:

* DOMAINS
* IPS

The DOMAINS table should contain two fields:

* `domain_key` - auto-increment
* `domain_name` - varchar(20)

The IPS table should contain three fields:

* `domain_key` - int(10) - a foreign key to the DOMAINS table
* `ip_number` - varchar(15)
* `ip_attribute` - varchar(10)
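
A minimal DDL sketch of that layout (exact types and sizes are my guesses,
adjust as needed):

CREATE TABLE DOMAINS (
  domain_key  INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  domain_name VARCHAR(20)  NOT NULL
);

CREATE TABLE IPS (
  domain_key   INT UNSIGNED NOT NULL,   -- foreign key to DOMAINS.domain_key
  ip_number    VARCHAR(15)  NOT NULL,
  ip_attribute VARCHAR(10)  NOT NULL,   -- e.g. 'disabled' or 'noticed'
  KEY (domain_key)
);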

Hoping this gives you some ideas how to handle this data.

Regards,

Mikhail Berman


Eva wrote:
 Hello,

 I have a table, which has a column named as domain, each domain has
 some IPs, each IP has two attributes: disabled,noticed.

 For example, the table:


 domain        IP
 www.aol.com   64.12.245.203 64.12.244.203 64.12.190.33 64.12.190.1


 But I don't know how to control the IP's attributes if I set up this
 table structure.

 Please help,thanks.

 Eva.

   

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Best way to purge a table

2009-11-09 Thread Mikhail Berman
If DELETE FROM [table_name] is not suitable for your task, then try
TRUNCATE TABLE [table_name]: http://dev.mysql.com/doc/refman/5.0/en/truncate.html
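
Side by side (the table name is just a placeholder):

DELETE FROM old_data_table;      -- deletes row by row; can be slow on a large table
TRUNCATE TABLE old_data_table;   -- drops and recreates the table, keeping its definition; much faster

If you do decide to drop and recreate instead, running
SHOW CREATE TABLE old_data_table; beforehand gives you the exact statement
needed to recreate it afterwards.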


Regards,

Mikhail

Jones, Keven wrote:

Hi,

I need to get rid of all data in one table of my database. The table just has 
old
Data that I no longer need. What is the best way to accomplish this?

If I simply drop the table what can I do prior to dropping the table to ensure
I can recreate the table right after I drop it? I still need the table just not
The data in it. I'm running out of space.

Thank you
  


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: self-joins in hierarchical queries: optimization problem

2009-10-29 Thread Mikhail Berman

Olga,

Would you post SHOW CREATE TABLE taxonomic_units1\G;?

It should give us more info on the table you are dealing with
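
In the meantime, the usual first thing to check for a self-join like this is
whether both join columns are indexed; a hedged sketch, assuming the tsn and
parent_tsn columns shown in your sample output:

ALTER TABLE taxonomic_units1
  ADD INDEX idx_tsn (tsn),
  ADD INDEX idx_parent_tsn (parent_tsn);

Each LEFT JOIN matches one alias's tsn to the next alias's parent_tsn, so
without an index on parent_tsn every join degenerates into a full scan, which
matches the EXPLAIN output below.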

Regards,

Mikhail Berman

Olga Lyashevska wrote:

Dear all,

I have a table which contains taxonomic data (species, genera, family, 
order, class) and it is organized as adjacency list model.


mysql> select * from taxonomic_units1 limit 5;
+-----+-------------------+------------+---------+
| tsn | name              | parent_tsn | rank_id |
+-----+-------------------+------------+---------+
|  50 | Bacteria          |          0 |      10 |
|  51 | Schizomycetes     |     202421 |      60 |
|  52 | Archangiaceae     |         51 |     140 |
|  53 | Pseudomonadale    |         51 |     100 |
|  54 | Rhodobacteriineae |         53 |     110 |
+-----+-------------------+------------+---------+

I am trying to flatten it, so that it can be used in further analysis 
(e.g. in R)
I have been trying to run the following query, and it does what I want 
it to do, but it takes really long time to get it done.
As a matter of fact I was not patient enough to get the whole output 
and instead set LIMIT 10.


SELECT O1.name AS tclass, O2.name AS torder, O4.name AS tfamily, 
O5.name AS tgenus, O6.name AS tspecies

FROM taxonomic_units1 AS O1
LEFT OUTER JOIN
taxonomic_units1 AS O2
ON O1.tsn = O2.parent_tsn
LEFT OUTER JOIN
taxonomic_units1 AS O3
ON O2.tsn = O3.parent_tsn
LEFT OUTER JOIN
taxonomic_units1 AS O4
ON O3.tsn = O4.parent_tsn
LEFT OUTER JOIN
taxonomic_units1 AS O5
ON O4.tsn = O5.parent_tsn
LEFT OUTER JOIN
taxonomic_units1 AS O6
ON O5.tsn = O6.parent_tsn
LIMIT 10;

+---------------+-----------------+------------------+-------------+-------------------------+
| tclass        | torder          | tfamily          | tgenus      | tspecies                |
+---------------+-----------------+------------------+-------------+-------------------------+
| Bacteria      | NULL            | NULL             | NULL        | NULL                    |
| Schizomycetes | Archangiaceae   | NULL             | NULL        | NULL                    |
| Schizomycetes | Pseudomonadales | NULL             | NULL        | NULL                    |
| Schizomycetes | Pseudomonadales | Nitrobacteraceae | Nitrobacter | Nitrobacteragilis       |
| Schizomycetes | Pseudomonadales | Nitrobacteraceae | Nitrobacter | Nitrobacterflavus       |
| Schizomycetes | Pseudomonadales | Nitrobacteraceae | Nitrobacter | Nitrobacteroligotrophis |
| Schizomycetes | Pseudomonadales | Nitrobacteraceae | Nitrobacter | Nitrobacterpolytrophus  |
| Schizomycetes | Pseudomonadales | Nitrobacteraceae | Nitrobacter | Nitrobacterpunctata     |
+---------------+-----------------+------------------+-------------+-------------------------+


I have checked this query with EXPLAIN, and it is not using any 
indices, even though column tsn is set as index in original table.


+----+-------------+-------+------+---------------+------+---------+------+--------+-------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows   | Extra |
+----+-------------+-------+------+---------------+------+---------+------+--------+-------+
|  1 | SIMPLE      | O1    | ALL  | NULL          | NULL | NULL    | NULL | 483305 |       |
|  1 | SIMPLE      | O2    | ALL  | NULL          | NULL | NULL    | NULL | 483305 |       |
|  1 | SIMPLE      | O3    | ALL  | NULL          | NULL | NULL    | NULL | 483305 |       |
|  1 | SIMPLE      | O4    | ALL  | NULL          | NULL | NULL    | NULL | 483305 |       |
|  1 | SIMPLE      | O5    | ALL  | NULL          | NULL | NULL    | NULL | 483305 |       |
|  1 | SIMPLE      | O6    | ALL  | NULL          | NULL | NULL    | NULL | 483305 |       |
+----+-------------+-------+------+---------------+------+---------+------+--------+-------+

6 rows in set (0.00 sec)


What is wrong with this query? Or is it a problem of all adjacency 
list models?

Is there a way to get columns indexed using self-joins?

Thanks,
Olga



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: mysqldump warning or actual error?

2009-09-02 Thread Mikhail Berman

Hi Matt,

The error you are getting is particular to the information_schema database.

information_schema does NOT actually have tables; they are views:

INFORMATION_SCHEMA is the information database, the place that stores
information about all the other databases that the MySQL server
maintains. Inside INFORMATION_SCHEMA there are several read-only
tables. They are actually views, not base tables, so there are no files
associated with them.

http://dev.mysql.com/doc/refman/5.1/en/information-schema.html

Therefore mysqldump generates an error when trying to dump tables that do
not exist.


Regards,

Mikhail Berman

Matt Neimeyer wrote:

My local windows machine has mysql 5.1.33 installed on it. One of my
Mac OSX dev servers has some 4.1 flavor of MySQL on it.

When I try to do something like the following: mysqldump -h devserver
-u me -p somedb > dump.sql

I get the following:

mysqldump: Error: 'Table 'information_schema.files' doesn't exist'
when trying to dump tablespaces

It looks like it creates the export fine, but I've been ssh-ing into
the dev box and doing it locally there just in case.

Should I be worried? Is there some option that would suppress that
(that I didn't see in mysqldump --help)? Is it truly harmless?

Thanks

Matt

  


MySQL 5.0.41 performance on FreeBSD 7.0-RC1 AMD64

2008-01-21 Thread Mikhail Berman
Hi everyone,

Does anyone have experience running MySQL 5.0.41 on FreeBSD 7.0-RC1 AMD64?

If you do, would you be able to comment on MySQL performance and on possible
advantages and problems?

Regards,
-- 
Mikhail Berman


Test E-mail

2007-08-06 Thread Mikhail Berman
Test E-mail
 
Mikhail Berman


RE: Getting list of queries run against a database

2007-06-20 Thread Mikhail Berman
Hi Ben,

If you are running MySQL on one of the *NIX platforms, you can use a script
similar to the one I wrote to monitor one of our DBs:

==
#!/bin/ksh

while true
do
  /bin/date >> db_access.report
  /data/mysql/mysql-standard/reloc/mysql-standard-4.1.10a-sun-solaris2.9-sparc-64bit/bin/mysqladmin -pX processlist >> /export/home/mikhail/db_access.report
  sleep 30
done
==

Then you can parse the resulting db_access.report, or whatever you decide to
call the file, to see which queries are most popular.
The sleep defines how often the snapshot of database activity is taken.

Regards,

Mikhail Berman


-Original Message-
From: Dan Buettner [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, June 20, 2007 10:34 AM
To: Ben Edwards
Cc: mysql@lists.mysql.com
Subject: Re: Getting list of queries run against a database

Ben, there's a slow query log feature that may be just what you're looking for:
http://dev.mysql.com/doc/refman/4.1/en/slow-query-log.html

There's an analysis script that will show you the most popular slow queries,
too: mysqldumpslow.

You can take those queries and use the EXPLAIN feature to start analyzing
how to speed them up.
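
For example (the table and column here are only placeholders for whatever
the slow log reports):

EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
-- 'type: ALL' with no possible_keys in the output usually means a full
-- table scan, i.e. a missing index on customer_id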

HTH,
Dan
On 6/20/07, Ben Edwards [EMAIL PROTECTED] wrote:

 We are having a problem with our mysql database (4.2) and think we may
 have indexes missing.  What we are trying to do is find out the most
 popular queries that are run.  We know there are not many and that they are
 relatively simple.

 Does anyone know of a tool that allows us to see what queries (i.e.
 via a log file) are/have been run against the database.  If it counts
 how may times/how much resources each query uses that would be good.
 The icing on the cake would be a prog that told us what queries were
 doing full table scans and other expensive operations.

 Regards,
 Ben
 --
 Ben Edwards - Bristol, UK
 If you have a problem emailing me use
 http://www.gurtlush.org.uk/profiles.php?uid=4
 (email address this email is sent from may be defunct)

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Design Help Needed

2007-06-14 Thread Mikhail Berman
Hi Sudheer,

First of all, there are a number of ways to design this database.

You will need to choose the one that you feel suits your needs best.

Here is one possible design.

Because you have different types of users/accounts, it looks like an
ACCOUNT_TYPE table is needed:

ACCOUNT_TYPE table

Account_type_id - autoincrement, PK
Account_type - varchar(25). ( Values in this field are:
Individual,Business,Partner,Internal)

Because you have user information to store:

USER_INFO table

User_id - autoincrement, PK
Account_type_id - integer, FK (foreign key to ACCOUNT_TYPE)
Fields of personal/business information to follow (First Name, Last Name, ...)

Because you have web site security info to store:

WEB_SITE_SECURITY_INFO table

User_id - integer, FK (foreign key to USER_INFO)
Security_question
Security_question_answer
Fields of security information to follow

You can grow the database as you add modules, but the more you can
foresee and design ahead, the better.
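
A minimal DDL sketch of the three tables above (any column names and types
beyond the ones already listed are just placeholders):

CREATE TABLE ACCOUNT_TYPE (
  account_type_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  account_type    VARCHAR(25) NOT NULL   -- 'Individual', 'Business', 'Partner', 'Internal'
);

CREATE TABLE USER_INFO (
  user_id         INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  account_type_id INT UNSIGNED NOT NULL,  -- FK to ACCOUNT_TYPE
  first_name      VARCHAR(50) NOT NULL,
  last_name       VARCHAR(50) NOT NULL
  -- further personal/business fields here
);

CREATE TABLE WEB_SITE_SECURITY_INFO (
  user_id                  INT UNSIGNED NOT NULL,  -- FK to USER_INFO
  security_question        VARCHAR(255) NOT NULL,
  security_question_answer VARCHAR(255) NOT NULL
  -- further security fields here
);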




Mikhail Berman

-Original Message-
From: Sudheer Satyanarayana [mailto:[EMAIL PROTECTED] 
Sent: Thursday, June 14, 2007 1:50 AM
To: mysql@lists.mysql.com
Subject: Design Help Needed

Hi,

I'm creating an application for my web site. I want help in designing 
database tables. Currently I'm starting with user management system.

The web site would have these types of users

1. Customer account
 1a. Individual account. This user would be an individual with username,

password, billing address, account security question, answer and few 
more fields.
 1b. Business account. Each business account would have many users. 
Currently I have not decided the number of users for this type of 
account. It may be 10 users in the beginning. I want to keep an option 
to increase the number of users for business accounts. The business 
account will have, business name, billing address, account security 
question, answer, and few other business details. Each user within the 
account will have username, password, first name  last name, mobile 
number and other personal details.
2. Partner account. These are similar to 1b business account type.
3. Internal account. These are employee accounts. Each user will have 
username, password, first name, last name, department, phone number and 
few other fields.

Ideally how many tables should I create? What are the types of 
keys(primary and foreign)

Other modules of the application I would be developing in the future 
are, contact management, shopping cart, mailing lists, customer support,

etc.

I have MySQL 4.1 on the server. Hope my question is clear.

PS: I'm new to databases.


Thanks for the help,
Sudheer. S
Binary Vibes



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Access Denied When Trying to Create Database

2007-06-11 Thread Mikhail Berman
It looks like the keyword ON is missing from the GRANT statement you used.

It should be: GRANT SUPER ON *.* TO 'untz'@'localhost' IDENTIFIED BY 'password';
not: grant super *.* to 'untz'@'localhost' identified by 'password';

Regards,

Mikhail Berman

-Original Message-
From: untz [mailto:[EMAIL PROTECTED] 
Sent: Monday, June 11, 2007 4:02 PM
To: Ananda Kumar
Cc: mysql@lists.mysql.com
Subject: Re: Access Denied When Trying to Create Database

Ananda,

Thank you for responding!

I just tried this and got the following:

mysql> grant super *.* to 'untz'@'localhost' identified by 'password';
ERROR 1064 (42000): You have an error in your SQL syntax; check the  
manual that corresponds to your MySQL server version for the right  
syntax to use near '*.* to [EMAIL PROTECTED] identified by password' at  
line 1

Can anyone please help me?

I am not a DBA

Kindest regards,

untz

On Jun 11, 2007, at 4:36 AM, Ananda Kumar wrote:

 Is this a user untz going to do even the create procedure, if  
 yes then
 you need to grant super previliege to this user.

 grant super on *.* to 'untz'@'localhost' identified by 'password';

 regards
 anandkl

 On 6/11/07, untz [EMAIL PROTECTED] wrote:

 Baron  Prathima,

 Thank for the information!

 What happened is that I hadn't used MySQL for a long time and looked
 up on the Internet on how to change my root and individual users'
 passwords.

 The last command the URL had me type was flush privileges; and once
 I did that, I think it created a problem for everything else.

 Here's what I got when I ran SHOW GRANTS:

 mysql show GRANTS;
 +

 ---
 --+
 | Grants for
 [EMAIL PROTECTED]
  |
 +

 ---
 --+
 | GRANT USAGE ON *.* TO 'untz'@'localhost' IDENTIFIED BY PASSWORD
 '*55C1BF0D7E49AB5343925CDD17F2F5F923B5248C'
 | GRANT ALL PRIVILEGES ON `depot_development`.* TO 'untz'@'localhost'
 | GRANT ALL PRIVILEGES ON `depot_test`.* TO 'untz'@'localhost'
 | GRANT ALL PRIVILEGES ON `depot_production`.* TO 'untz'@'localhost'
 | GRANT ALL PRIVILEGES ON `music_development`.* TO 'untz'@'localhost'
 | GRANT ALL PRIVILEGES ON `music_test`.* TO 'untz'@'localhost'
 | GRANT ALL PRIVILEGES ON `music_production`.* TO 'untz'@'localhost'
 +

 ---
 --+
 7 rows in set (0.00 sec)

 What am I supposed to do? I can not run queries or even create  
 tables...

 I also tried using a MySQL client (CocoaMySQL) and it displayed the
 same error as in the previous e-mail (see below).

 Am really stuck on this so any help, suggestions, etc. would be
 greatly appreciated!

 Sincerely yours,

 untz

 On Jun 10, 2007, at 5:44 AM, Baron Schwartz wrote:

  Hi untz,
 
  untz wrote:
  Hello there,
  I am using MySQL 5 on OS X Tiger...
  After starting the server, I tried to create a sample database and
  this is what what happened:
  $ mysql -u untz -p
  Enter password:
  Welcome to the MySQL monitor.  Commands end with ; or \g.
  Your MySQL connection id is 4 to server version: 5.0.16-standard
  Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
  mysql create database people_development;
  ERROR 1044 (42000): Access denied for user 'untz'@'localhost' to
  database 'people_development'
  mysql
 
  Try running SHOW GRANTS while logged in, and see what privileges
  you have.  You probably need to grant your user some additional
  privileges.
 
  Cheers
  Baron
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql?
  [EMAIL PROTECTED]
 


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql? 
 [EMAIL PROTECTED]




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Could someone explain

2007-04-20 Thread Mikhail Berman
Dear List,
 
We are running:
 
mysql> status
--
mysql  Ver 14.12 Distrib 5.0.27, for unknown-freebsd6.0 (i386) using
readline 5.0

 
Could someone explain the meaning, or give us a brief explanation, of the
following entries in the err file on the MySQL server:
 
==
Status information:
 
Current dir: /mysql/mysql_data/data/
Running threads: 1  Stack size: 196608
Current locks:
lock: 0xac9623c:
 
lock: 0xac8da3c:
 
lock: 0xac60a3c:
 
lock: 0xac5aa3c:
 
lock: 0xac54a3c:
 
lock: 0xac47a3c:
 
lock: 0xac4423c:
 
lock: 0xac3d23c:
 

Key caches:
default
Buffer_size: 268435456
Block_size:   1024
Division_limit:100
Age_limit: 300
blocks used:   895
not flushed: 0
w_requests:  3
writes:  1
r_requests: 185177
reads: 895
 

handler status:
read_key:  116
read_next:   98382
read_rnd 0
read_first:  3
write:  96
delete   3
update:  0
 
Table status:
Opened tables: 14
Open tables:8
Open files:21
Open streams:   0
 
Alarm status:
Active alarms:   1
Max used alarms: 1
Next alarm time: 28799
=
 
Best Regards,
 
Mikhail Berman


RE: SQL restore deleted records

2007-04-13 Thread Mikhail Berman
I do not have a Russian keyboard here to respond to you in Russian, but
maybe you could describe a problem in Russian. I am not sure what
happened

Mikhail 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Friday, April 13, 2007 4:17 AM
To: [EMAIL PROTECTED]
Subject: Re: SQL restore deleted records

 SQL
 Hello, All.

 There is a problem.
 Ones many records from MyISAM table are deleted.
 Nobody change this table after this.
 The records have variable length.

 How can I restore it?
 The table format isn't well described in manual, as for me.
Is it not possible to recover the data from the MySQL binary log (if it
exists)? The insert statements must be there.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Mysqldump Files

2007-01-29 Thread Mikhail Berman
Hi David,

Let me point you in a slightly different direction. You are already running
replication, as it seems from your E-mail.

So, why not just run chained replication from the second to the third server
and use replicate-do-table = [table_name] in the my.cnf of the third
server to limit the selection of tables to be used by the web server? Or do a
full replication to another server from the first one for a full backup?

Regards,

Mikhail Berman

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Monday, January 29, 2007 2:33 PM
To: mysql
Subject: Mysqldump Files

Howdy Guys and Gals,

We are acquiring data on background radiation in a master-slave server 
environment (RH9.0, MySQL 4.0.24) at the rate of approximately 19,000 
records per day.  The data are insert-only into about 25 of 31 tables in

the database -- no updates are ever applied to the data. Information
from 
the database is used via select statements for graphical display and 
report generation amongst other uses. 

A PHP backup script using mysqldump runs as a cron job each night from a

third server which also functions as an intranet webserver. After 1 1/2 
years of operation, the mysqldump file of the entire database is roughly

760 MB, and this takes under 2 minutes to create.  Once bzipped and 
tarred, the entire file is 31.7 MB in size, and this part of the backup 
process now takes 46-47 minutes. 

The rate of acquisition of data will be fairly constant, and up to 3
years 
of data will be kept on the live master-slave, so simply doubling all 
these values seems a realistic expectation for a full backup of the 
database after 3 years.  Data older than 3 years would be deleted from
the 
master-slave system.

How long would it be reasonable to keep doing a full dump of the database
versus using mysqldump with a where clause, i.e., doing a daily
incremental backup, say of the last 24 hours?

Also, what are the key mysqldump and/or server variables to pay
attention 
to in order to process these large, maybe multi-gigabyte dump files?

Thanks,

David

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Mysqldump Files

2007-01-29 Thread Mikhail Berman
Hi David,

Is the space on the hard drive your major concern, or is the ability to
recover from a crash?

Backups are usually taken to be able to recover from a crash, which in
turn means that if there is a way to recover faster, it is better. Having a
slave that is constantly updated gives you a very quick way of recovering
if the master goes down. Just point your PHP scripts to the slave and be happy.

If you need additional backups, take them from the full slave: stop it for a
while, do the backups, and then restart the slave again.
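
The pause-backup-resume idea in SQL form (the backup step itself, mysqldump
or a file copy, is run from the shell in between):

STOP SLAVE;     -- replication pauses; the slave data stays consistent
-- ... run the backup here ...
START SLAVE;    -- the slave catches up with the master again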

For huge backups in our office we use old, and I mean old, Dells with
huge 500GB drives running one of the *NIX flavors.


Regards,

Mikhail Berman

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Monday, January 29, 2007 3:27 PM
To: Mikhail Berman
Cc: mysql
Subject: RE: Mysqldump Files

Hi Mikhail,

I don't think that would save much space, in terms of file size. The 
tables that are actively getting inserts are large and growing larger 
(~750,000 records), and those that have no activity are either currently

empty or have less than a hundred records in them. So just dumping the 
active tables will comprise I'd guess 99% or more of the database size. 

David

--

Hi David,

Let me point you in a bit different direction. You are already running
replication as it seems from your E-mail

So, why not just run chained replication from second to the third server
and use replicate-do-table = [table_name] in my.cnf of the third
server to limit selection of tables to be used by web server. Or do full
replication to another server from the first one for full backup?

Regards,

Mikhail Berman

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Monday, January 29, 2007 2:33 PM
To: mysql
Subject: Mysqldump Files

Howdy Guys and Gals,

We are acquiring data on background radiation in a master-slave server 
environment (RH9.0, MySQL 4.0.24) at the rate of approximately 19,000 
records per day.  The data are insert-only into about 25 of 31 tables in

the database -- no updates are ever applied to the data. Information
from 
the database is used via select statements for graphical display and 
report generation amongst other uses. 

A PHP backup script using mysqldump runs as a cron job each night from a

third server which also functions as an intranet webserver. After 1 1/2 
years of operation, the mysqldump file of the entire database is roughly

760 MB, and this takes under 2 minutes to create.  Once bzipped and 
tarred, the entire file is 31.7 MB in size, and this part of the backup 
process now takes 46-47 minutes. 

The rate of acquisition of data will be fairly constant, and up to 3
years 
of data will be kept on the live master-slave, so simply doubling all 
these values seems a realistic expectation for a full backup of the 
database after 3 years.  Data older than 3 years would be deleted from
the 
master-slave system.

How long it would be reasonable to keep doing a full dump of the
database 
versus using mysqldump with a where clause, i.e., doing a daily 
incremental backup, say of the last 24 hours. 

Also, what are the key mysqldump and/or server variables to pay
attention 
to in order to process these large, maybe multi-gigabyte dump files?

Thanks,

David

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Replication help, please

2007-01-18 Thread Mikhail Berman
Dear List,
 
As recently as last Sunday, January 14, 2007, we enabled replication
between two servers in our organization.
The master server runs MySQL 4.1.10a, the slave runs 5.0.18.
 
Since then, we have had a number of interruptions in replication when
the slave server stopped replicating for different reasons. 
I was able to fix the problems pointed out by the error log on the slave
server, but I am witnessing strange behavior on the part of the slave. 
Every time I look up the slave status using show slave status, I see the
value of Seconds_Behind_Master getting bigger, not smaller as one would
expect.
I am pasting actual reports of show slave status at the end of this
E-mail.
 
Could anyone help me find out why the slave reports this, and how to
overcome it?
 
mysql> show slave status\G;
*** 1. row ***
 Slave_IO_State: Waiting for master to send event
Master_Host: saruman
Master_User: alatarreplica
Master_Port: 3306
  Connect_Retry: 60
Master_Log_File: SB2000-bin.000139
Read_Master_Log_Pos: 857395571
 Relay_Log_File: alatar-relay-bin.05
  Relay_Log_Pos: 190740012
  Relay_Master_Log_File: SB2000-bin.000139
   Slave_IO_Running: Yes
  Slave_SQL_Running: Yes
Replicate_Do_DB: secdocs
Replicate_Ignore_DB:
 Replicate_Do_Table:
 Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
 Last_Errno: 0
 Last_Error:
   Skip_Counter: 0
Exec_Master_Log_Pos: 190663065
Relay_Log_Space: 858304045
Until_Condition: None
 Until_Log_File:
  Until_Log_Pos: 0
 Master_SSL_Allowed: No
 Master_SSL_CA_File:
 Master_SSL_CA_Path:
Master_SSL_Cert:
  Master_SSL_Cipher:
 Master_SSL_Key:
  Seconds_Behind_Master: 285342
1 row in set (0.00 sec)
 
ERROR:
No query specified
 
mysql> show slave status\G;
*** 1. row ***
 Slave_IO_State: Waiting for master to send event
Master_Host: saruman
Master_User: alatarreplica
Master_Port: 3306
  Connect_Retry: 60
Master_Log_File: SB2000-bin.000139
Read_Master_Log_Pos: 857395745
 Relay_Log_File: alatar-relay-bin.05
  Relay_Log_Pos: 190740012
  Relay_Master_Log_File: SB2000-bin.000139
   Slave_IO_Running: Yes
  Slave_SQL_Running: Yes
Replicate_Do_DB: secdocs
Replicate_Ignore_DB:
 Replicate_Do_Table:
 Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
 Last_Errno: 0
 Last_Error:
   Skip_Counter: 0
Exec_Master_Log_Pos: 190663065
Relay_Log_Space: 858304221
Until_Condition: None
 Until_Log_File:
  Until_Log_Pos: 0
 Master_SSL_Allowed: No
 Master_SSL_CA_File:
 Master_SSL_CA_Path:
Master_SSL_Cert:
  Master_SSL_Cipher:
 Master_SSL_Key:
  Seconds_Behind_Master: 285344
1 row in set (0.00 sec)
 
ERROR:
No query specified

 
Your help is greatly appreciated,
 
Mikhail Berman
Ives Group
 


RE: MySQL export to csv

2006-12-29 Thread Mikhail Berman
Hi Alf,


SELECT ... INTO OUTFILE 'file_name' [export_options] FROM yourtable could be
of great help in your situation.
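
A minimal sketch of a CSV export (the path, table name and delimiters are
placeholders):

SELECT *
INTO OUTFILE '/tmp/yourtable.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM yourtable;

Note that the file is written by the MySQL server itself, so the path must be
writable by the server and the file must not already exist.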

Regards,

Mikhail Berman


-Original Message-
From: Alf Stockton [mailto:[EMAIL PROTECTED] 
Sent: Friday, December 29, 2006 11:34 AM
To: mysql@lists.mysql.com
Subject: Re: MySQL export to csv

Dwight E Chadbourne wrote:
 If there's only a couple of tables you could just export to CSV per 
 table (phpmyadmin makes this easy).  Simple for the Access user to
import.

Great. In fact there is only one table but I cannot see where in
phpmyadmin the export is done..?

--
Regards,
Alf Stocktonwww.stockton.co.za

All things that are, are with more spirit chased than enjoyed.
-- Shakespeare, Merchant of Venice
My email disclaimer is available at www.stockton.co.za/disclaimer.html

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: insert into some table show status like 'foo'

2006-12-15 Thread Mikhail Berman
Hi Chris,

If you are running on *NIX, you could write a script generally structured
like this (pseudocode):

while true
do
  # show status | grep 'whatever_string_you_want_to_see'
  # insert the captured value into the table
  sleep [seconds]
done

I am not sure how to do the same on Windows.

Regards

Mikhail Berman

-Original Message-
From: Chris Comparini [mailto:[EMAIL PROTECTED] 
Sent: Friday, December 15, 2006 2:12 PM
To: mysql@lists.mysql.com
Subject: insert into some table show status like 'foo'


Hello,

Say I wanted to log some various server status variables to a table.
What I'd like to do, ideally, is something like
this:

insert into SomeLogTable (Threads) show status like 'Threads_running'; 

MySQL does not allow this, of course.  But, is there some other way to
see the Threads_running (or other status variables) such that this
would be possible?

Incidentally, one of the things I'd like to log thus is the slave status
Seconds_Behind_Master.  It's a little disappointing that you cannot do
this:

show slave status like 'Seconds_Behind_Master';

.. but rather have to get the entire slave status back in order to see
this one thing.  Anyway...

If anyone has any ideas on this, I'd love to hear them.

Thanks,
  - Chris

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Copying DB to new structure

2006-12-13 Thread Mikhail Berman
Well,

Maybe 70 pairs of SELECT ... INTO OUTFILE / LOAD DATA INFILE. At least
this way you can select only the columns you want to be in your new
database.
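
One such pair, sketched (paths, table and column names are placeholders):

SELECT col_a, col_b, col_c
INTO OUTFILE '/tmp/old_table.txt'
FROM old_db.old_table;

LOAD DATA INFILE '/tmp/old_table.txt'
INTO TABLE new_db.new_table (col_a, col_b, col_c);

Dropped columns simply never appear in the SELECT list, so nothing has to be
altered on the InnoDB side afterwards.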

Best,

Mikhail Berman

-Original Message-
From: Russell Horn [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, December 13, 2006 12:41 PM
To: mysql@lists.mysql.com
Subject: Copying DB to new structure

We have a moderately sized database, more than 5GB in size, several
million rows and 70 tables.

We're running MySQL 5.22 and the database uses innodb throughout with
multiple foreign keys in use.

During development the structure of several tables has been changed many
times, such that we now have a number of redundant columns.

We've created a new, empty database with our proposed new structure and
I'm now looking for the most efficient way to get our existing data into
this new structure, dropping any data in columns that no longer exist. 
Can anyone propose a sensible way to go about this?

Because we're using innodb, dropping columns one at a time takes an age
as every index is rebuilt. Just loading the database from a mysqldump
file takes about five hours, so I'm pretty sure we'll want to load data
from our old database into the new db with the new structure - if anyone
can recommend a strategy to do that, or suggest an alternative, I'd be
most appreciative!

Thanks,

Russell

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: How many records in table?

2006-12-06 Thread Mikhail Berman
Use SELECT with COUNT(*):

SELECT COUNT(*) FROM YOUR_TABLE;


Mikhail Berman

-Original Message-
From: Dotan Cohen [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, December 06, 2006 3:37 PM
To: MySQL General
Subject: How many records in table?

What's a quick query to determine how many records a given table
contains? I don't think that a SELECT query is appropriate, as I don't
intend on doing anything with the data selected. Note that I'm
interfacing with MySQL via php, if that matters. Thanks.

Dotan Cohen

http://what-is-what.com/what_is/copyleft.html
http://lyricslist.com/

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Any explanation for this, please

2006-11-29 Thread Mikhail Berman
Dear List,
 
Could I get an explanation why 1 fails, but 2 works.
 
1.
 
Update statement preceded with explain, fails:
 
explain update COMPANY_NUMBERS_tmp_Mikhail c join tmp_HEMSCOTT_MKTVALUES
t on c.ticker = t.TickerSymbol and t.InterimEndingDate =
replace(left(c.date_qtr,7),'-','/') set
c.total_shares_outstanding_market_cap = t.MarketValue;
ERROR 1064 (42000): You have an error in your SQL syntax; check the
manual that corresponds to your MySQL server version for the right
syntax to use near 'update COMPANY_NUMBERS_tmp_Mikhail c join
tmp_HEMSCOTT_MKTVALUES t on c.ticker =' at line 1
 
2.
 
Actual execution of the update statement goes through, no problem
 
mysql> update COMPANY_NUMBERS_tmp_Mikhail c join tmp_HEMSCOTT_MKTVALUES
t on c.ticker = t.TickerSymbol and t.InterimEndingDate =
replace(left(c.date_qtr,7),'-','/') 
set c.total_shares_outstanding_market_cap = t.MarketValue;
Query OK, 157551 rows affected (4 min 22.81 sec)
Rows matched: 162999  Changed: 157551  Warnings: 0

Regards,
 
Mikhail Berman
 


RE: Any explanation for this, please

2006-11-29 Thread Mikhail Berman
Thank you, Sir.

So, there is no way to get a plan for an update query?
 
Regards,

Mikhail Berman

-Original Message-
From: Paul DuBois [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, November 29, 2006 2:37 PM
To: Mikhail Berman; mysql@lists.mysql.com
Subject: Re: Any explanation for this, please

At 14:31 -0500 11/29/06, Mikhail Berman wrote:
Dear List,

Could I get an explanation why 1 fails, but 2 works.

Because EXPLAIN is used only with SELECT statements.
Try to use a similar SELECT.
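
For instance, a SELECT that mirrors the UPDATE's join (same tables and join
condition as the statement below), which can then be EXPLAINed:

EXPLAIN SELECT c.total_shares_outstanding_market_cap, t.MarketValue
FROM COMPANY_NUMBERS_tmp_Mikhail c
JOIN tmp_HEMSCOTT_MKTVALUES t
  ON c.ticker = t.TickerSymbol
 AND t.InterimEndingDate = REPLACE(LEFT(c.date_qtr, 7), '-', '/');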



1.

Update statement preceded with explain, fails:

explain update COMPANY_NUMBERS_tmp_Mikhail c join 
tmp_HEMSCOTT_MKTVALUES t on c.ticker = t.TickerSymbol and 
t.InterimEndingDate =
replace(left(c.date_qtr,7),'-','/') set 
c.total_shares_outstanding_market_cap = t.MarketValue; ERROR 1064 
(42000): You have an error in your SQL syntax; check the manual that 
corresponds to your MySQL server version for the right syntax to use 
near 'update COMPANY_NUMBERS_tmp_Mikhail c join tmp_HEMSCOTT_MKTVALUES 
t on c.ticker =' at line 1

2.

Actual execution of the update statement goes through, no problem

mysql> update COMPANY_NUMBERS_tmp_Mikhail c join tmp_HEMSCOTT_MKTVALUES
t on c.ticker = t.TickerSymbol and t.InterimEndingDate =
replace(left(c.date_qtr,7),'-','/')
set c.total_shares_outstanding_market_cap = t.MarketValue; Query OK, 
157551 rows affected (4 min 22.81 sec) Rows matched: 162999  Changed: 
157551  Warnings: 0

Regards,

Mikhail Berman



--
Paul DuBois, MySQL Documentation Team
Madison, Wisconsin, USA
MySQL AB, www.mysql.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Any explanation for this, please

2006-11-29 Thread Mikhail Berman
Thank you 


Mikhail Berman

-Original Message-
From: Paul DuBois [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, November 29, 2006 3:30 PM
To: Mikhail Berman; mysql@lists.mysql.com
Subject: RE: Any explanation for this, please

At 14:41 -0500 11/29/06, Mikhail Berman wrote:
Thank you, Sir.

So, there is no way to plan update query?


Not except in the sense of getting a plan for a similar SELECT
statement.


Regards,

Mikhail Berman

-Original Message-
From: Paul DuBois [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 29, 2006 2:37 PM
To: Mikhail Berman; mysql@lists.mysql.com
Subject: Re: Any explanation for this, please

At 14:31 -0500 11/29/06, Mikhail Berman wrote:
Dear List,

Could I get an explanation why 1 fails, but 2 works.

Because EXPLAIN is used only with SELECT statements.
Try to use a similar SELECT.



1.

Update statement preceded with explain, fails:

explain update COMPANY_NUMBERS_tmp_Mikhail c join 
tmp_HEMSCOTT_MKTVALUES t on c.ticker = t.TickerSymbol and 
t.InterimEndingDate =
replace(left(c.date_qtr,7),'-','/') set 
c.total_shares_outstanding_market_cap = t.MarketValue; ERROR 1064
(42000): You have an error in your SQL syntax; check the manual that 
corresponds to your MySQL server version for the right syntax to use 
near 'update COMPANY_NUMBERS_tmp_Mikhail c join tmp_HEMSCOTT_MKTVALUES

t on c.ticker =' at line 1

2.

Actual execution of the update statement goes through, no problem

mysql update COMPANY_NUMBERS_tmp_Mikhail c join 
mysql tmp_HEMSCOTT_MKTVALUES
t on c.ticker = t.TickerSymbol and t.InterimEndingDate =
replace(left(c.date_qtr,7),'-','/')
set c.total_shares_outstanding_market_cap = t.MarketValue; Query OK,
157551 rows affected (4 min 22.81 sec) Rows matched: 162999  Changed:
157551  Warnings: 0

Regards,

Mikhail Berman



--
Paul DuBois, MySQL Documentation Team
Madison, Wisconsin, USA
MySQL AB, www.mysql.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


-- 
Paul DuBois, MySQL Documentation Team
Madison, Wisconsin, USA
MySQL AB, www.mysql.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



MyISAM to InnoDB conversion help

2006-11-01 Thread Mikhail Berman
Hi everyone,
 
I am hoping to get help with the extremely slow performance of a MyISAM to
InnoDB conversion, or to find out whether this kind of performance is usual.

I have a MyISAM table that contains 3,299,509 rows and I am trying to
convert it to InnoDB for the use of row-level locking, and I am
getting an insertion speed of 0.243 records per second while doing
INSERT INTO InnoDB_table SELECT * FROM MyISAM_Table.
 
Your help is appreciated.
 
Here is what my environment looks like.
 
Hardware:
 
SunBlade 2000 with 2GB processor connected to StorEdge A5200 with RAID5
on it.
 
OS:
 
[EMAIL PROTECTED]/uname -a
SunOS * 5.9 Generic_118558-19 sun4u sparc SUNW,Sun-Blade-1000

MySQL:
 
mysql status;
--
mysql  Ver 14.12 Distrib 5.0.18, for sun-solaris2.9 (sparc) using
readline 5.0
 
InnoDB tables structure:
 
 DAILY_EDGAR_INNODB |CREATE TABLE `DAILY_EDGAR_INNODB` (
  `ftp_file_name_key` char(80) NOT NULL default '',
  `history_record` int(1) NOT NULL default '0',
  `description` char(100) NOT NULL default '',
  `company_fkey` char(10) NOT NULL default '',
  `company_name` char(100) NOT NULL default '',
  `subject_company_fkey` char(10) NOT NULL default '',
  `filer_description` char(10) NOT NULL default '',
  `form_fkey` char(20) NOT NULL default '',
  `file_accepted` char(20) NOT NULL default '',
  `been_evaluated` char(20) NOT NULL default '',
  `uport_evaluated` int(1) NOT NULL default '0',
  `file_date` char(10) NOT NULL default '',
  `file_size` char(10) NOT NULL default '50 KB',
  `accession_number` char(24) NOT NULL default '',
  `http_file_name_html` char(100) NOT NULL default '',
  `http_file_name_text` char(100) NOT NULL default '',
  `create_date` date NOT NULL default '-00-00',
  `change_date` date NOT NULL default '-00-00',
  PRIMARY KEY  (`ftp_file_name_key`),
  KEY `company_idx` (`company_fkey`),
  KEY `filaccdx` (`file_accepted`),
  KEY `beendx` (`been_evaluated`),
  KEY `fidadx` (`file_date`),
  KEY `upevdx` (`uport_evaluated`),
  KEY `crdadx` (`create_date`),
  KEY `hiredx` (`history_record`),
  KEY `accession_number` (`accession_number`),
  KEY `fofkdx` (`form_fkey`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 |

Procedure used to execute conversion:
 
root 27686  0.0  0.2 5840 3224 ?S 14:08:23  0:00 mysql
-pxx xxx -e insert into DAILY_EDGAR_INNODB select * from
DAILY_EDGAR

my.cnf InnoDB section:
 
# Uncomment the following if you are using InnoDB tables
innodb_data_home_dir = /export/home/mysqldata/ibdata
innodb_data_file_path = ibdata1:2000M;ibdata2:10M:autoextend
innodb_log_group_home_dir = /export/home/mysqldata/ibdata
innodb_log_arch_dir = /export/home/mysqldata/ibdata
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
innodb_buffer_pool_size = 1G
innodb_additional_mem_pool_size = 50M
# Set .._log_file_size to 25 % of buffer pool size
#innodb_log_file_size = 100M
#innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50
 
Best,
 
Mikhail Berman
 


RE: MyISAM to InnoDB conversion help

2006-11-01 Thread Mikhail Berman
Hi Rolando,

Thank you for your help.  

I am on MySQL 5, and I have tried to do the conversion using the ALTER TABLE
command, with the same very slow result.

Do you by any chance have specific suggestions on how to tweak the variables
related to this?

Here is what I got:

+----------------------------------+--------------------------------------+
| Variable_name                    | Value                                |
+----------------------------------+--------------------------------------+
| innodb_additional_mem_pool_size  | 52428800                             |
| innodb_autoextend_increment      | 8                                    |
| innodb_buffer_pool_awe_mem_mb    | 0                                    |
| innodb_buffer_pool_size          | 1073741824                           |
| innodb_checksums                 | ON                                   |
| innodb_commit_concurrency        | 0                                    |
| innodb_concurrency_tickets       | 500                                  |
| innodb_data_file_path            | ibdata1:2000M;ibdata2:10M:autoextend |
| innodb_data_home_dir             | /export/home/mysqldata/ibdata        |
| innodb_doublewrite               | ON                                   |
| innodb_fast_shutdown             | 1                                    |
| innodb_file_io_threads           | 4                                    |
| innodb_file_per_table            | OFF                                  |
| innodb_flush_log_at_trx_commit   | 1                                    |
| innodb_flush_method              |                                      |
| innodb_force_recovery            | 0                                    |
| innodb_lock_wait_timeout         | 50                                   |
| innodb_locks_unsafe_for_binlog   | OFF                                  |
| innodb_log_arch_dir              | /export/home/mysqldata/ibdata        |
| innodb_log_archive               | OFF                                  |
| innodb_log_buffer_size           | 1048576                              |
| innodb_log_file_size             | 5242880                              |
| innodb_log_files_in_group        | 2                                    |
| innodb_log_group_home_dir        | /export/home/mysqldata/ibdata        |
| innodb_max_dirty_pages_pct       | 90                                   |
| innodb_max_purge_lag             | 0                                    |
| innodb_mirrored_log_groups       | 1                                    |
| innodb_open_files                | 300                                  |
| innodb_support_xa                | ON                                   |
| innodb_sync_spin_loops           | 20                                   |
| innodb_table_locks               | ON                                   |
| innodb_thread_concurrency        | 20                                   |
| innodb_thread_sleep_delay        | 1                                    |
+----------------------------------+--------------------------------------+

Best,

Mikhail Berman

-Original Message-
From: Rolando Edwards [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, November 01, 2006 11:05 AM
To: Mikhail Berman
Cc: mysql@lists.mysql.com
Subject: Re: MyISAM to InnoDB conversion help

If you are doing this in MySQL 5, try this:

ALTER TABLE table-name ENGINE = InnoDB;

That's all.

Let MySQL worry about the conversion.
You may also want to tweak the InnoDB system variables (show variables like
'innodb%') for better InnoDB performance prior to trying this.

- Original Message -
From: Mikhail Berman [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Wednesday, November 1, 2006 10:31:13 AM GMT-0500 US/Eastern
Subject: MyISAM to InnoDB conversion help

Hi everyone,
 
I am hoping to get help with extremely slow performance of MyISAM to
InnoDB conversion. Or find out if this type of performance is usual
 
I have MyISAM table that contains - 3,299,509 rows and I am trying to
convert it to InnoDB for the use with row-level locking, and I am
getting insertion speed of 0.243 of a record a second while I am doing
INSERT INTO InnoDB_table SELECT * FROM MyISAM_Table.
 
Your help is appreciated.
 
Here is what my environment looks like.
 
Hardware:
 
SunBlade 2000 with 2GB processor connected to StorEdge A5200 with RAID5
on it.
 
OS:
 
[EMAIL PROTECTED]/uname -a
SunOS * 5.9 Generic_118558-19 sun4u sparc SUNW,Sun-Blade-1000

MySQL:
 
mysql status;
--
mysql  Ver 14.12 Distrib 5.0.18, for sun-solaris2.9 (sparc) using
readline 5.0
 
InnoDB tables structure:
 
 DAILY_EDGAR_INNODB |CREATE TABLE `DAILY_EDGAR_INNODB` (
  `ftp_file_name_key` char(80) NOT NULL default '',
  `history_record` int(1) NOT NULL default '0',
  `description` char(100) NOT NULL default '',
  `company_fkey` char(10) NOT NULL default '',
  `company_name` char(100) NOT NULL default '',
  `subject_company_fkey` char(10) NOT NULL default '',
  `filer_description` char(10) NOT NULL default '',
  `form_fkey` char(20) NOT NULL default '',
  `file_accepted` char(20) NOT NULL default '',
  `been_evaluated` char(20) NOT NULL default '',
  `uport_evaluated` int(1) NOT NULL default '0',
  `file_date` char(10) NOT NULL default '',
  `file_size` char(10) NOT NULL default '50 KB',
  `accession_number` char(24) NOT NULL default '',
  `http_file_name_html` char(100) NOT NULL default '',
  `http_file_name_text` char(100) NOT NULL default '',
  `create_date` date NOT NULL default '-00-00',
  `change_date` date NOT NULL default '-00-00',
  PRIMARY KEY  (`ftp_file_name_key`),
  KEY `company_idx` (`company_fkey`),
  KEY `filaccdx` (`file_accepted`),
  KEY `beendx` (`been_evaluated`),
  KEY `fidadx` (`file_date`),
  KEY `upevdx` (`uport_evaluated`),
  KEY `crdadx` (`create_date`),
  KEY `hiredx` (`history_record`),
  KEY `accession_number` (`accession_number`),
  KEY `fofkdx` (`form_fkey`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 |

Procedure used to execute conversion:
 
root 27686  0.0  0.2 5840 3224 ?S 14

RE: MyISAM to InnoDB conversion help

2006-11-01 Thread Mikhail Berman
 
Great,

Thank you for your help Rolando,

Mikhail Berman


-Original Message-
From: Rolando Edwards [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, November 01, 2006 11:41 AM
Cc: mysql@lists.mysql.com; Mikhail Berman
Subject: Re: MyISAM to InnoDB conversion help

I just noticed your innodb_data_file_path: you have a shared InnoDB
tablespace. That can be murder on a MySQL server.

You may want to separate each InnoDB table into its own file.

Here are the steps needed to separate InnoDB tables.

1) Do a mysqldump on your database to mydata.sql.
2) Shutdown MySQL
3) Goto my.cnf and add 'innodb_file_per_table' in the [mysqld] section
4) Change in my.cnf : innodb_data_file_path to ibdata1:10M:autoextend
Note: You may want add this too : bulk_insert_buffer_size = 256M
5) Delete ibdata1, ibdata2, and the ib_logfile* files
6) Restart MySQL (the innodb data files and log will regenerate)
7) Run MySQL using the script mydata.sql

All InnoDB data will be sitting in separate .ibd files in the database
folder. Only the data dictionary info for all InnoDB tables will be
sitting in the ibdata1 file.

Give it a try.

- Original Message -
From: Rolando Edwards [EMAIL PROTECTED]
To: Mikhail Berman [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Wednesday, November 1, 2006 11:24:00 AM GMT-0500 US/Eastern
Subject: Re: MyISAM to InnoDB conversion help

Check these variable
bulk_insert_buffer_size (Default usually 8M) innodb_buffer_pool_size
(Default usually 8M)


- Original Message -
From: Mikhail Berman [EMAIL PROTECTED]
To: Rolando Edwards [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Wednesday, November 1, 2006 11:13:44 AM GMT-0500 US/Eastern
Subject: RE: MyISAM to InnoDB conversion help

Hi Rolando,

Thank you for your help.  

I am on MySQL 5, and I have tried to do the conversion using ALTER TABLE
command. With the same very slow result.

Do you by any chance have specific suggestions how to tweak variables
related to this?

Here is what I got:

+-+-
-+
| Variable_name   | Value
|
+-+-
-+
| innodb_additional_mem_pool_size | 52428800
|
| innodb_autoextend_increment | 8
|
| innodb_buffer_pool_awe_mem_mb   | 0
|
| innodb_buffer_pool_size | 1073741824
|
| innodb_checksums| ON
|
| innodb_commit_concurrency   | 0
|
| innodb_concurrency_tickets  | 500
|
| innodb_data_file_path   | ibdata1:2000M;ibdata2:10M:autoextend
|
| innodb_data_home_dir| /export/home/mysqldata/ibdata
|
| innodb_doublewrite  | ON
|
| innodb_fast_shutdown| 1
|
| innodb_file_io_threads  | 4
|
| innodb_file_per_table   | OFF
|
| innodb_flush_log_at_trx_commit  | 1
|
| innodb_flush_method |
|
| innodb_force_recovery   | 0
|
| innodb_lock_wait_timeout| 50
|
| innodb_locks_unsafe_for_binlog  | OFF
|
| innodb_log_arch_dir | /export/home/mysqldata/ibdata
|
| innodb_log_archive  | OFF
|
| innodb_log_buffer_size  | 1048576
|
| innodb_log_file_size| 5242880
|
| innodb_log_files_in_group   | 2
|
| innodb_log_group_home_dir   | /export/home/mysqldata/ibdata
|
| innodb_max_dirty_pages_pct  | 90
|
| innodb_max_purge_lag| 0
|
| innodb_mirrored_log_groups  | 1
|
| innodb_open_files   | 300
|
| innodb_support_xa   | ON
|
| innodb_sync_spin_loops  | 20
|
| innodb_table_locks  | ON
|
| innodb_thread_concurrency   | 20
|
| innodb_thread_sleep_delay   | 1
|
+-+-
-+

Best,

Mikhail Berman

-Original Message-
From: Rolando Edwards [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, November 01, 2006 11:05 AM
To: Mikhail Berman
Cc: mysql@lists.mysql.com
Subject: Re: MyISAM to InnoDB conversion help

If you are do this in MySQL 5, try this:

ALTER TABLE table-name ENGINE = InnoDB;

That's all.

Let MySQL worry about the conversion.
You may also want to tweak the InnoDB
system variables (SHOW VARIABLES LIKE 'innodb%') for better InnoDB
performance prior to trying this.

- Original Message -
From: Mikhail Berman [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Wednesday, November 1, 2006 10:31:13 AM GMT-0500 US/Eastern
Subject: MyISAM to InnoDB conversion help

Hi everyone,
 
I am hoping to get help with the extremely slow performance of a MyISAM to
InnoDB conversion, or to find out if this type of performance is usual.
 
I have a MyISAM table that contains 3,299,509 rows and I am trying to
convert it to InnoDB for use with row-level locking, and I am
getting an insertion speed of 0.243 records a second while I am doing
INSERT INTO InnoDB_table SELECT * FROM MyISAM_Table.
 
Your help is appreciated.
 
Here is what my environment looks like.
 
Hardware:
 
SunBlade 2000 with 2GB processor connected

RE: taking MySQL down into admin mode

2006-09-14 Thread Mikhail Berman
Hi Chris,

In my.cnf, in the following section, remove the comment from the skip-networking
statement and restart your MySQL:

# Don't listen on a TCP/IP port at all. This can be a security
enhancement,
# if all processes that need to connect to mysqld run on the same host.
# All interaction with mysqld must be made via Unix sockets or named
pipes.
# Note that using this option without enabling named pipes on Windows
# (via the enable-named-pipe option) will render mysqld useless!
#
#skip-networking

This is to be uncommented before running administrative tasks, and
commented back out to put MySQL back live.
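
For example, the round trip might look something like this (a sketch; the
socket path and the way you restart mysqld depend on your installation):

# uncomment skip-networking in my.cnf, then restart
$mysqladmin -u root -p shutdown
$mysqld_safe &
# local clients can still connect through the socket
$mysql --socket=/tmp/mysql.sock -u root -p
# when the administrative work is done, comment skip-networking back
# out and restart once more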

Regards,

Mikhail Berman

-Original Message-
From: Wagner, Chris (GEAE, CBTS) [mailto:[EMAIL PROTECTED] 
Sent: Thursday, September 14, 2006 4:08 PM
To: mysql@lists.mysql.com
Subject: taking MySQL down into admin mode

Greetz.  We have a database that is highly used, around 500 queries/s,
and doing administrative tasks can stuff up the database.  Is there a
way to temporarily prevent client connections from within MySQL?  Like in
Unix, where you can drop the runlevel to do administration, is there something
similar in MySQL?  I haven't seen anything promising in the manual. 
We're on 5.0.24.


--
Chris Wagner
CBTS
GE Aircraft Engines
[EMAIL PROTECTED]

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: How to make this work ?

2006-09-01 Thread Mikhail Berman
Hi Ravi,

If you are working on one of the *NIX systems, you can try to use

$mysqldump --no-data DB1 tblname | mysql DB2

The --no-data switch will dump only the database structure.
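
For what it's worth (untested here): if both databases live on the same MySQL
server and you are on 4.1 or later, the statement from your original question
should also work as written, since CREATE TABLE ... LIKE accepts qualified
names:

mysql> CREATE TABLE DB2.tblname LIKE DB1.tblname;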

Regards,

Mikhail Berman

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Friday, September 01, 2006 5:47 AM
To: mysql@lists.mysql.com
Subject: How to make this work ?


Hi All,

  How to make this work CREATE TABLE DB2.tblname LIKE DB1.tblname;

Can we have simultaneous connections with 2 DBs?





Regards,

Ravi K








--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Incorrect information in file...

2006-08-31 Thread Mikhail Berman
Hi Duane,

Have you tried to repair the test table using REPAIR TABLE to see if that
solves your problem?

Regards,

Mikhail Berman

-Original Message-
From: Duane Hill [mailto:[EMAIL PROTECTED] 
Sent: Thursday, August 31, 2006 9:49 AM
To: mysql@lists.mysql.com
Subject: Incorrect information in file...


  MySQL v5.0.24 on FreeBSD v6.0-RELEASE

  I  was  attempting  to  do  some  tweaking  in  a my.cnf to increase
  performance  on  a  server here. I had copied the my-huge.cnf config
  file  into /etc and uncommented the section on InnoDB from within. I
  also changed the thread_concurrency option from 8 to 4. That was the
  only  changes  I  made  in  the  configuration file. Upon restarting
  MySQL,  I  could  not select anything from any of the InnoDB tables
  and received this error:

  ERROR 1033 (HY000): Incorrect information in file:
'./testdb/test.frm'
  
  I  did  do  some searching in the list archives but couldn't come up
  with anything concrete.

  For the convenience, here is the options that were uncommented:

  innodb_data_home_dir = /var/db/mysql/
  innodb_data_file_path = ibdata1:2000M;ibdata2:10M:autoextend
  innodb_log_group_home_dir = /var/db/mysql/
  innodb_log_arch_dir = /var/db/mysql/
  innodb_buffer_pool_size = 384M
  innodb_additional_mem_pool_size = 20M
  innodb_log_file_size = 100M
  innodb_log_buffer_size = 8M
  innodb_flush_log_at_trx_commit = 1
  innodb_lock_wait_timeout = 50

--
This message was sent using 100% recycled electrons.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Incorrect information in file...

2006-08-31 Thread Mikhail Berman
How much do you care about the test table, Duane?

In your own words, you "do not have anything really set up yet", so drop
the table or even the whole testdb database and see if that helps.
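
A sketch of those two options (both are destructive, so only because, as you
say, nothing is set up yet):

mysql> DROP TABLE testdb.test;

or

mysql> DROP DATABASE testdb;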

Regards,

Mikhail Berman

-Original Message-
From: Duane Hill [mailto:[EMAIL PROTECTED] 
Sent: Thursday, August 31, 2006 10:18 AM
To: Mikhail Berman
Cc: mysql@lists.mysql.com
Subject: Re: Incorrect information in file...

On Thursday, August 31, 2006 at 2:02:42 PM, Mikhail confabulated:

 Hi Duane,

 Have you tried to repair test table using REPAIR to see if that 
 solves your problem?

That didn't seem to work. The result returned was:

  mysql> repair table test;
  +-------------+--------+----------+-----------------------------------------------------+
  | Table       | Op     | Msg_type | Msg_text                                            |
  +-------------+--------+----------+-----------------------------------------------------+
  | testdb.test | repair | error    | Incorrect information in file: './testdb/test.frm'  |
  +-------------+--------+----------+-----------------------------------------------------+

I even tried with the USE_FRM option and received the same result.
  
 -Original Message-
 From: Duane Hill [mailto:[EMAIL PROTECTED]
 Sent: Thursday, August 31, 2006 9:49 AM
 To: mysql@lists.mysql.com
 Subject: Incorrect information in file...


   MySQL v5.0.24 on FreeBSD v6.0-RELEASE

   I  was  attempting  to  do  some  tweaking  in  a my.cnf to increase
   performance  on  a  server here. I had copied the my-huge.cnf config
   file  into /etc and uncommented the section on InnoDB from within. I
   also changed the thread_concurrency option from 8 to 4. That was the
   only  changes  I  made  in  the  configuration file. Upon restarting
   MySQL,  I  could  not select anything from any of the InnoDB tables
   and received this error:

   ERROR 1033 (HY000): Incorrect information in file:
 './testdb/test.frm'
   
   I  did  do  some searching in the list archives but couldn't come up
   with anything concrete.

   For the convenience, here is the options that were uncommented:

   innodb_data_home_dir = /var/db/mysql/
   innodb_data_file_path = ibdata1:2000M;ibdata2:10M:autoextend
   innodb_log_group_home_dir = /var/db/mysql/
   innodb_log_arch_dir = /var/db/mysql/
   innodb_buffer_pool_size = 384M
   innodb_additional_mem_pool_size = 20M
   innodb_log_file_size = 100M
   innodb_log_buffer_size = 8M
   innodb_flush_log_at_trx_commit = 1
   innodb_lock_wait_timeout = 50

-- 
This message was sent using 100% recycled electrons.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: SOS

2006-08-30 Thread Mikhail Berman
Hi Lian,

I am going to take a guess that your full execution line looks as follows:

$mysql -u root -h localhost -p yourpwd

If this is true, then MySQL will give you the prompt:

Enter password: yourpwd

And then mysql will return:

ERROR 1049 (42000): Unknown database 'yourpwd'

This happens because, while one can type either -u root or -uroot, in the case
of the password it must be -pyourpwd, with no space between the switch -p and
your password; otherwise the word after -p is taken as the database name.
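
To illustrate, either of these forms works (the password shown is just a
placeholder):

$mysql -u root -h localhost -p
Enter password: ********

$mysql -u root -h localhost -pyourpwd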

Regards,

Mikhail Berman

-Original Message-
From: 李彦 [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, August 29, 2006 5:29 AM
To: mysql
Subject: SOS

Dear Sir:
I have some problems with the mysql 5.0 binary source on Linux (RedHat). I'm 
able to start up the mysql process.
But when I type in: mysql -u root -h localhost -p, and then put in the 
correct password, I cannot enter the database.
I took almost one week to deal with this issue. But 
Can you help me? Maybe it is a mysql version problem?



Regards!


李彦
[EMAIL PROTECTED]
  2006-08-29

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Allow other host

2006-08-24 Thread Mikhail Berman
Hi Andreas,

On the local host, using the mysql database, execute the following command at
the mysql prompt:

mysql> grant all privileges on [database_name].* to 'user'@'remote_host'
identified by 'password';

Please read more about this at

http://dev.mysql.com/doc/refman/4.1/en/grant.html or whatever version of
MySQL you have.
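
As a concrete sketch (the database name, user, host address and password
below are made up for illustration):

mysql> GRANT ALL PRIVILEGES ON mydb.* TO 'appuser'@'192.168.0.25'
IDENTIFIED BY 'secret';
mysql> FLUSH PRIVILEGES;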

Regards

Mikhail Berman

-Original Message-
From: news [mailto:[EMAIL PROTECTED] On Behalf Of Andreas Moroder
Sent: Thursday, August 24, 2006 9:53 AM
To: mysql@lists.mysql.com
Subject: Allow other host

Hello,

I have a mysql DB running on a Windows XP machine.
I can access it locally via mysql command, I have also a ODBC driver
installed and the test says it works ok.
When I try to connect from a remote host via JDBC I get the error
message that this host is not allowed to access the db.

How can I enable a external host to access this DB ?

It is possible to configure this from command line ?
( winmysqladmin is not working properly on this machine )

Thanks
Andreas


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Table specific privileges (BUMP)

2006-07-13 Thread Mikhail Berman
Hi Scott,

One would think that you should be able to accomplish what you are
looking for by changing rows in the tables_priv table in the mysql
database, and using FLUSH PRIVILEGES when you are done.


mysql> describe tables_priv;
+-------------+------------------------------------------------------------------------------------------------+------+-----+-------------------+-------+
| Field       | Type                                                                                           | Null | Key | Default           | Extra |
+-------------+------------------------------------------------------------------------------------------------+------+-----+-------------------+-------+
| Host        | char(60)                                                                                       |      | PRI |                   |       |
| Db          | char(64)                                                                                       |      | PRI |                   |       |
| User        | char(16)                                                                                       |      | PRI |                   |       |
| Table_name  | char(64)                                                                                       |      | PRI |                   |       |
| Grantor     | char(77)                                                                                       |      | MUL |                   |       |
| Timestamp   | timestamp                                                                                      | YES  |     | CURRENT_TIMESTAMP |       |
| Table_priv  | set('Select','Insert','Update','Delete','Create','Drop','Grant','References','Index','Alter') |      |     |                   |       |
| Column_priv | set('Select','Insert','Update','References')                                                  |      |     |                   |       |
+-------------+------------------------------------------------------------------------------------------------+------+-----+-------------------+-------+
8 rows in set (0.00 sec)

mysql>

See http://dev.mysql.com/doc/refman/4.1/en/request-access.html

The paragraph begins with the words "After determining the database-specific
privileges granted by the db and host table entries".
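
To sketch what the end result could look like (the database name below is
made up, and this assumes the two users' privileges are granted per table
rather than for the whole database), per-table GRANTs are what create the
tables_priv rows:

mysql> GRANT SELECT, INSERT, UPDATE, DELETE ON shopdb.Orders_A TO 'user_A'@'%';
mysql> GRANT SELECT, INSERT, UPDATE, DELETE ON shopdb.Orders_B TO 'user_B'@'%';
mysql> FLUSH PRIVILEGES;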

Regards,

Mikhail Berman

-Original Message-
From: Scott Haneda [mailto:[EMAIL PROTECTED] 
Sent: Thursday, July 13, 2006 4:13 PM
To: MySql
Subject: Table specific privileges (BUMP)

Sorry to push this back out to the list, I am stumped, and the docs are
not leading me to an answer.

One user's reply was close, and I had tried it, but it generates an
error, which is also posted in this thread.  Thanks everyone; the original
message follows:

MySQL - 4.0.18-standard-log

How do you revoke all privileges from a user for one table in a
database, and still maintain the existing privileges for the other
tables?

For example, I have these tables:
Email
Logbook
Sales_tax
Sessions
Transactions
Users
Orders_A
Orders_B

Let's say I have two users, user_A and user_B. Currently, both users have
select, insert, update, and delete on all tables.
I want to totally block user_A from touching Orders_B and totally block
user_B from touching Orders_A.

Knowing how to do this the SQL way would help, ultimately, I have to
show a client how to do this in phpMyAdmin, so if anyone knows how to do
it in there, that would be nice as well.

Thanks.

--
-
Scott HanedaTel: 415.898.2602
http://www.newgeo.com Novato, CA U.S.A.



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Q2. Is there anything could be done to speed up this query

2006-05-04 Thread Mikhail Berman
 
Thank you Dan,

I will try that.

Mikhail Berman

-Original Message-
From: Dan Nelson [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, May 03, 2006 5:29 PM
To: Mikhail Berman
Cc: Chris White; mysql@lists.mysql.com
Subject: Re: Q2. Is there anything could be done to speed up this query

In the last episode (May 03), Mikhail Berman said:
 Thank you, Chris
 
 But the table is indexed on the field you are referring to and the 
 other one the query, which is evident from this:
 
KEY `prdadadx` (`price_data_date`),
KEY `prdatidx` (`price_data_ticker`)

These are two separate keys, though, and your query is doing a GROUP BY
across both fields, so neither of those keys would be useful (mysql
would have to do a random record lookup for each row to fetch the other
field).  Try an index on (price_data_ticker, price_data_date).  Since
your query only references those fields, mysql should be able to return
your results just by scanning the index.
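
In other words, something along these lines (the index name is made up):

mysql> ALTER TABLE TICKER_HISTORY_PRICE_DATA_STAGING_NO_KEYS
    ->   ADD INDEX tida_idx (price_data_ticker, price_data_date);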

-- 
Dan Nelson
[EMAIL PROTECTED]

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Q1. What would run faster?

2006-05-03 Thread Mikhail Berman
Dear List,
 
I am looking to see what the List thinks about this question.
 
If we were to run the same query, one that needs a tmp table to be opened to get
an answer:
* on a server with a RAID array
* and on a server without one, with the rest of the hardware kept as close to
identical as possible.
 
Where would this query run faster?
 
Regards,
 
Mikhail Berman


RE: Q1. What would run faster?

2006-05-03 Thread Mikhail Berman
Thank you, David,

We are using RAID 5.

But could I bring up a point here. 

A RAID device usually serves to preserve data by creating a mirror
copy of files across its hard drives. If this is true, then consider a
large query that requires a large temp file, one that exists on disk
for a long time; in my case it takes over an hour to get the answer
back. Would not the RAID try to make a copy of that temp file, and by
doing so would it not prolong the return of the answer?

Regards,

Mikhail Berman

-Original Message-
From: David Israelsson [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, May 03, 2006 2:54 PM
To: mysql@lists.mysql.com
Subject: Re: Q1. What would run faster?

Mikhail Berman [EMAIL PROTECTED] writes:

 Dear List,
  
 I am looking to see what the List thinks about this question.
  
 If we to run the same query that needs tmp table to be open to get an 
 answer.
 * on a server with
 * and without an RAID array, the rest of hardware would not change as 
 much as possible.
  
 Where this query would run faster?
  
For disk-intensive applications, regardless of whether it's a database or some
other application, a proper RAID setup will of course run faster.  It
also depends on what kind of RAID you are using, and how well the RAID
implementation (typically the RAID controller) works.

/David

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Q2. Is there anything could be done to speed up this query

2006-05-03 Thread Mikhail Berman
Dear List,
 
 
I have a table: 
 
CREATE TABLE `TICKER_HISTORY_PRICE_DATA_STAGING_NO_KEYS` (
  `price_data_ticker` char(8) NOT NULL default '',
  `price_data_date` date NOT NULL default '0000-00-00',
  `price_data_open` float default NULL,
  `price_data_high` float default NULL,
  `price_data_low` float default NULL,
  `price_data_close` float default NULL,
  `price_data_volume` float default NULL,
  KEY `prdadadx` (`price_data_date`),
  KEY `prdatidx` (`price_data_ticker`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 |

That holds:
 
mysql select count(*) from TICKER_HISTORY_PRICE_DATA_STAGING_NO_KEYS;
+--+
| count(*) |
+--+
| 19087802 |
+--+
1 row in set (0.00 sec)

I am looking to see if there is something I can do to speed up this
query:
 
select count(price_data_date), price_data_date from
TICKER_HISTORY_PRICE_DATA_STAGING_NO_KEYS group by price_data_ticker,
price_data_date having count(price_data_date) > 1;
 
My explain returns:
 
mysql> explain select count(price_data_date), price_data_date from
TICKER_HISTORY_PRICE_DATA_STAGING_NO_KEYS group by price_data_ticker,
price_data_date having count(price_data_date) > 1;
+----+-------------+--------------------------------------------+------+---------------+------+---------+------+----------+---------------------------------+
| id | select_type | table                                      | type | possible_keys | key  | key_len | ref  | rows     | Extra                           |
+----+-------------+--------------------------------------------+------+---------------+------+---------+------+----------+---------------------------------+
|  1 | SIMPLE      | TICKER_HISTORY_PRICE_DATA_STAGING_NO_KEYS | ALL  | NULL          | NULL | NULL    | NULL | 19087802 | Using temporary; Using filesort |
+----+-------------+--------------------------------------------+------+---------------+------+---------+------+----------+---------------------------------+

This table is intentionally designed without the primary keys, so we can
catch and display duplicates.
 
Regards,
 
 
Mikhail Berman
 


RE: Q2. Is there anything could be done to speed up this query

2006-05-03 Thread Mikhail Berman
Thank you, Chris 

But the table is indexed on the field you are referring to and on the other
one in the query, which is evident from this:

   KEY `prdadadx` (`price_data_date`),
   KEY `prdatidx` (`price_data_ticker`)

And this:

 ll TICKER_HISTORY_PRICE_DATA_STAGING_NO_KEYS.*
-rw-rw   1 mysqlmysql610809664 May  1 13:32
TICKER_HISTORY_PRICE_DATA_STAGING_NO_KEYS.MYD
-rw-rw   1 mysqlmysql223084544 May  1 13:34
TICKER_HISTORY_PRICE_DATA_STAGING_NO_KEYS.MYI   <-- huge index file
-rw-rw   1 mysqlmysql8902 May  1 09:00
TICKER_HISTORY_PRICE_DATA_STAGING_NO_KEYS.frm 

Any other ideas, please?

Mikhail Berman

-Original Message-
From: Chris White [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, May 03, 2006 3:27 PM
To: mysql@lists.mysql.com
Subject: Re: Q2. Is there anything could be done to speed up this query

On Wednesday 03 May 2006 12:16 pm, Mikhail Berman wrote:
 I have a table:

 CREATE TABLE `TICKER_HISTORY_PRICE_DATA_STAGING_NO_KEYS` (
   `price_data_ticker` char(8) NOT NULL default '',
   `price_data_date` date NOT NULL default '-00-00',
   `price_data_open` float default NULL,
   `price_data_high` float default NULL,
   `price_data_low` float default NULL,
   `price_data_close` float default NULL,
   `price_data_volume` float default NULL,
   KEY `prdadadx` (`price_data_date`),
   KEY `prdatidx` (`price_data_ticker`)
 ) ENGINE=MyISAM DEFAULT CHARSET=latin1 |

snip

 mysql> explain select count(price_data_date), price_data_date from
 TICKER_HISTORY_PRICE_DATA_STAGING_NO_KEYS group by price_data_ticker,
 price_data_date having count(price_data_date) > 1;

 +----+-------------+--------------------------------------------+------+---------------+------+---------+------+----------+---------------------------------+
 | id | select_type | table                                      | type | possible_keys | key  | key_len | ref  | rows     | Extra                           |
 +----+-------------+--------------------------------------------+------+---------------+------+---------+------+----------+---------------------------------+
 |  1 | SIMPLE      | TICKER_HISTORY_PRICE_DATA_STAGING_NO_KEYS | ALL  | NULL          | NULL | NULL    | NULL | 19087802 | Using temporary; Using filesort |
 +----+-------------+--------------------------------------------+------+---------------+------+---------+------+----------+---------------------------------+

Well, one problem is that nothing is being indexed.  I think your best
bet is that if you're using that as a high volume query, to look at
indexing other fields (possibly price_data_date as it seems to be the
main hit for your search).  However, this is really all going to depend
on how the database is interacted with as well.  If this is the only
query on this table, or the only major query, then I'd say look at
indexing price_data_date per what I'm seeing in  your query.

 This table is intentionally designed without the primary keys, so we 
 can catch and display duplicates.

 Regards,


 Mikhail Berman

--
Chris White
PHP Programmer / DB Monkey
Interfuel

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Student question answer schema

2006-04-11 Thread Mikhail Berman
Brian,

Sounds like you need to normalize your data. A quick review of what you
sent to the list shows that you need 5 tables:

* student table - each student's description
* module table - each module's description
* question table - each question's description
* module-to-question table - which questions belong to which module.
* student-to-question table - which student answered which question and
whether it was a right answer.

As long as there are NO questions that simultaneously belong to more than
one module, you may not need a student-to-module table, because knowing
which question belongs to which module tells you which student has
taken which module. There might be variations to this depending on a
different set of facts. For example, from your description it is not
clear if you are storing data only about tests, as answered questions, or
storing data that tells you what classes (modules) a student has taken or
is taking.
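
A minimal sketch of the five tables (all names and column choices here are
only illustrative, not a finished design):

CREATE TABLE student (
  student_id INT NOT NULL PRIMARY KEY,
  student_name VARCHAR(80) NOT NULL
);
CREATE TABLE module (
  module_id INT NOT NULL PRIMARY KEY,
  module_title VARCHAR(80) NOT NULL
);
CREATE TABLE question (
  question_id INT NOT NULL PRIMARY KEY,
  question_text TEXT NOT NULL
);
CREATE TABLE module_question (
  module_id INT NOT NULL,
  question_id INT NOT NULL,
  PRIMARY KEY (module_id, question_id)
);
CREATE TABLE student_question (
  student_id INT NOT NULL,
  question_id INT NOT NULL,
  answered_correctly TINYINT(1) NOT NULL DEFAULT 0,
  PRIMARY KEY (student_id, question_id)
);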


Best,


Mikhail Berman

-Original Message-
From: JC [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, April 11, 2006 11:22 AM
To: Brian Menke
Cc: mysql@lists.mysql.com
Subject: Re: Student question answer schema

On Mon, 10 Apr 2006, Brian Menke wrote:

 Does anyone happen to know where a basic schema for tracking questions

 and answers from tests that a student has completed. I don't know why 
 I am having difficulty with this, but I can't seem to figure out how 
 to set up the tables correctly to store this information.

 The basics

 N number of students
 N number of learning modules
 Each learning module has multiple questions Each question has multiple

 answers.

 I need to figure out the tables to track when a student has taken a 
 module (easy)and which questions they got wrong in each module and 
 then be able to run various kinds of reports on questions that 
 students got wrong in various ways. It seems like this should be 
 simple, but I'm struggling with it. Does anyone know where an example
of this type of schema would be?

 Thanks for your help in advance!

 -Brian




To be efficient, you need to break the data down into multiple tables; otherwise
you'll end up with something like this:

tblID|studentID|moduleID|questionID|answerID|
1|1||xx||
2|1||x2|yy10|
3|1||x3|yy20|
4|3||xx||


you get the idea, a lot of data will be repeated.  not a good idea.

jc

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Any comment to this article from the LIST?

2006-02-15 Thread Mikhail Berman
 Dear List, 
 
Can anyone offer any comments on:

 
Oracle Gives MySQL a Raspberry for Valentine's Day
 
http://www.eweek.com/article2/0,1895,1926600,00.asp
 
Specifically on referential integrity issues raised in the article.
 
Regards and thank you
 
Mikhail Berman
 


The CSV Storage Engine question

2006-01-18 Thread Mikhail Berman
Hi everyone,
 
Could you let me know if there is a way to enable the CSV storage engine
after MySQL has been built?
 
Below is my current configuration and output of show engines
 
 
mysql> show engines;
+------------+---------+------------------------------------------------------------+
| Engine     | Support | Comment                                                    |
+------------+---------+------------------------------------------------------------+
| MyISAM     | DEFAULT | Default engine as of MySQL 3.23 with great performance     |
| HEAP       | YES     | Alias for MEMORY                                           |
| MEMORY     | YES     | Hash based, stored in memory, useful for temporary tables  |
| MERGE      | YES     | Collection of identical MyISAM tables                      |
| MRG_MYISAM | YES     | Alias for MERGE                                            |
| ISAM       | NO      | Obsolete storage engine, now replaced by MyISAM            |
| MRG_ISAM   | NO      | Obsolete storage engine, now replaced by MERGE             |
| InnoDB     | YES     | Supports transactions, row-level locking, and foreign keys |
| INNOBASE   | YES     | Alias for INNODB                                           |
| BDB        | NO      | Supports transactions and page-level locking               |
| BERKELEYDB | NO      | Alias for BDB                                              |
| NDBCLUSTER | NO      | Clustered, fault-tolerant, memory-based tables             |
| NDB        | NO      | Alias for NDBCLUSTER                                       |
| EXAMPLE    | NO      | Example storage engine                                     |
| ARCHIVE    | YES     | Archive storage engine                                     |
| CSV        | NO      | CSV storage engine                                         |
+------------+---------+------------------------------------------------------------+
16 rows in set (0.00 sec)
 
mysql status
--
mysql  Ver 14.7 Distrib 4.1.10a, for sun-solaris2.9 (sparc)
 
Connection id:  89977
Current database:
Current user:   [EMAIL PROTECTED]
SSL:Not in use
Current pager:  stdout
Using outfile:  ''
Using delimiter:;
Server version: 4.1.10a-standard-log
Protocol version:   10
Connection: Localhost via UNIX socket
Server characterset:latin1
Db characterset:latin1
Client characterset:latin1
Conn.  characterset:latin1
UNIX socket:/tmp/mysql.sock
Uptime: 1 day 19 hours 3 min 14 sec
 
Threads: 5  Questions: 8858515  Slow queries: 467  Opens: 5574  Flush
tables: 1  Open tables: 411  Queries per second avg: 57.154
--
 
mysql

 
Thank you
 
Mikhail Berman


RE: remotely show databases

2006-01-09 Thread Mikhail Berman
Hi Anthony,

I am not sure if you have an installation of MySQL on your local server.
If you do, then you can try something like the command below to execute your
SHOW DATABASES:

local_server$ [path to your mysql/bin directory]/mysql
--host=your_remote_host --user=your_user --password=your_password -e
"SHOW DATABASES"

Make sure that [EMAIL PROTECTED] has appropriate rights on
your_remote_host (server)
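
If you want the output as a bare, newline-separated list that is easy to
parse, the -s (silent) and -N (skip column names) client switches should
help, e.g.:

local_server$ [path to your mysql/bin directory]/mysql
--host=your_remote_host --user=your_user --password=your_password -s -N -e
"SHOW DATABASES"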

Best,

Mikhail Berman

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Anthony Ettinger
Sent: Sunday, January 08, 2006 4:03 PM
To: mysql@lists.mysql.com
Subject: remotely show databases

I know I can login via ssh and run $mysqlshow

But I would then have to parse the outputted text, is there an easier
way (I'm using Perl locally here).

The pitfall of running it locally is that you DO have to password
protect your database user since it's an outside connection to run SHOW
DATABASES;

I tried $man mysqlshow, but didn't see any easy way of simply returning
a \n separated list of databases.

Any suggestions?

I also need to do this for postgresql if anyone else knows of a
standalone app that dumps the databases for a specific user.


--
Anthony Ettinger
Signature: http://chovy.dyndns.org/hcard.html

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: PHP4 or PHP5?

2005-12-12 Thread Mikhail Berman
 
Just to give an example of how what Shawn is saying is very TRUE:

Of my department-wide MS-Access databases, the largest one has 39 tables.

On the MySQL side, the enterprise-wide database has 340 tables.

Best,

Mikhail Berman


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 12, 2005 9:36 AM
To: Charles Walmsley
Cc: [EMAIL PROTECTED]; mysql@lists.mysql.com
Subject: RE: PHP4 or PHP5?

You are most welcome. 

As a comment to Fester: sometimes even 40 tables are not enough for a
single application. Once you start dealing in enterprise-level data
systems, 40 tables is how many you wish you had. I am sure there are
some applications using several hundred tables out there and doing just
fine. 
You can't judge the size of a database by the number of tables it has so
this may very well be a small database. 

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

[EMAIL PROTECTED] wrote on 12/12/2005 05:30:57 AM:

 The main items to be stored are images and video clips but taking 
 advice from previous emails on this list, we will be holding these 
 outside
MySql so
 the tables are to do with loading these, manipulating them, and 
 keeping details on clients, customers and contacts etc.  None of them 
 will be
large.
 I estimate that if the largest one exceeds 200,000 records we will be 
 millionaires!
 
 As I am not expecting to do anything radical I have started to write 
 the site in PHP5.
 
 Thanks to those who replied to my email
 
 Ch
 
 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED]
On
 Behalf Of Peter M. Groen
 Sent: 12 December 2005 00:14
 To: mysql@lists.mysql.com
 Subject: Re: PHP4 or PHP5?
 
 
 On Sunday 11 December 2005 23:51, Charles Walmsley wrote:
  Dear All,
 
 
 
  I do not have much experience with PHP or MySql although I have used
SQL
  quite a lot.  I am going to set up a relatively small MySQL database
 (circa
  40 tables) and we are expecting a hit rate of about 40,000 visitors
per
  annum mostly browsing a relatively low number of pages each.  We 
  plan
to
  launch in March
 
 [ 8 ]--
 
 Ehm.. To be blunt... 40 TABLES??? You call that small? What on earth 
 are
you
 
 going to store.
 
 Fester
 

 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: SHOW commands.

2005-11-30 Thread Mikhail Berman
Michael,

You have three options with the mysqldump command to do what you are looking
for:

$mysqldump --add-drop-table db_name table_name - this one will dump the
data and the create-table info and add a DROP TABLE IF EXISTS before the
CREATE TABLE in the dump, so you would be able to re-create the original
table and its data to a tee.

$mysqldump --no-create-info db_name table_name - this will dump only the
data, nothing of the table structure.

$mysqldump --no-data db_name table_name - this will dump only the table
structure; no data will be dumped.
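
For example, to capture the output in a text file and replay it later (the
file name is made up):

$mysqldump --add-drop-table db_name table_name > table_name_copy.sql
$mysql db_name < table_name_copy.sql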

Regards,

Mikhail Berman

-Original Message-
From: Michael Williams [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, November 29, 2005 11:30 PM
To: mysql@lists.mysql.com
Subject: SHOW commands.

Hi all,

Is there a command similar to SHOW CREATE TABLE. . . that will output
the commands to fully duplicate a table; data and all?  I want to
retrieve the command and write it to a text file.

Basically what I need is a SHOW on "CREATE TABLE copy SELECT * FROM
original", but SHOW doesn't seem to work here.  I need a copy of this
command so that I can then replicate that table as often as desired in
the future on whatever system is in place.  I could obviously dump the
entire db, but i only want this on a per table basis, as I deem
necessary, whenever I deem it so.  Any help would be greatly
appreciated.

Thanks in advance!

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: A key question

2005-11-18 Thread Mikhail Berman
| ON
|
| system_time_zone                | EST                                       |
| table_cache                     | 512                                       |
| table_type                      | MyISAM                                    |
| thread_cache_size               | 8                                         |
| thread_concurrency              | 8                                         |
| thread_stack                    | 196608                                    |
| time_format                     | %H:%i:%s                                  |
| time_zone                       | SYSTEM                                    |
| tmp_table_size                  | 33554432                                  |
| tmpdir                          |                                           |
| transaction_alloc_block_size    | 8192                                      |
| transaction_prealloc_size       | 4096                                      |
| tx_isolation                    | REPEATABLE-READ                           |
| version                         | 4.1.10a-standard-log                      |
| version_comment                 | MySQL Community Edition - Standard (GPL)  |
| version_compile_machine         | sparc                                     |
| version_compile_os              | sun-solaris2.9                            |
| wait_timeout                    | 28800                                     |
+---------------------------------+-------------------------------------------+
180 rows in set (0.00 sec)

Mikhail Berman

-Original Message-
From: Jeremy Cole [mailto:[EMAIL PROTECTED] 
Sent: Thursday, November 17, 2005 5:23 PM
To: Mikhail Berman
Cc: Jasper Bryant-Greene; mysql@lists.mysql.com
Subject: Re: A key question

Hi Mikhail,

 I may not have been precise in my question, but the Unique Index in 
 question is a two fields index, and I was looking to find out wisdom 
 from the List if there is sense and/or experience in keying 
 second(left) field on in the Unique Index to speed up a search.

If you have a UNIQUE(a, b), then MySQL can use it as an index for (a),
or (a, b), but NOT for (b).  In this context, it won't help generally to
create an index on (a), but it may help to create one on (b) depending
on your queries.

 I am dealing with 32M rows table, where second field in the Unique 
 Index is a date field. Unfortunately for my simple SELECT MAX(Date) 
 as Latest_Date from THE_TABLE took 4 minutes and some seconds, so 
 before I will go and buy bigger server I needed to re-assure myself 
 that there is no other way.

Four minutes to find a MAX(date) is too long for any kind of hardware. 
It should be much faster.  Can you post the output of:

   * SHOW CREATE TABLE tbl
   * SHOW VARIABLES

FYI:

mysql select max(dep_time) from ontime_all;
+-+
| max(dep_time)   |
+-+
| 2005-05-31 23:59:00 |
+-+
1 row in set (49.76 sec)

mysql select count(*) from ontime_all;
+--+
| count(*) |
+--+
| 33395077 |
+--+
1 row in set (0.00 sec)

Could be a lot faster, even, but these are MERGE tables so it's really
65 tables that are being checked...


Regards,

Jeremy

-- 
Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: A key question

2005-11-18 Thread Mikhail Berman
   |   |
| cshflst_net_cash_from_operating_activities_qtr                | bigint(20) | YES  |     | NULL    |       |
| cshflst_net_cash_from_operating_activities_ttm                | bigint(20) | YES  |     | NULL    |       |
| cshflst_net_cash_from_investing_activities_qtr                | bigint(20) | YES  |     | NULL    |       |
| cshflst_net_cash_from_investing_activities_ttm                | bigint(20) | YES  |     | NULL    |       |
| cshflst_net_cash_from_financing_activities_qtr                | bigint(20) | YES  |     | NULL    |       |
| cshflst_net_cash_from_financing_activities_ttm                | bigint(20) | YES  |     | NULL    |       |
| cshflst_net_cash_flow_change_in_cash_and_cash_equivalents_qtr | bigint(20) | YES  |     | NULL    |       |
| cshflst_net_cash_flow_change_in_cash_and_cash_equivalents_ttm | bigint(20) | YES  |     | NULL    |       |
+----------------------------------------------------------------+------------+------+-----+---------+-------+
39 rows in set (0.00 sec)
 
mysql select count(*) from COMPANY_NUMBERS;
+--+
| count(*) |
+--+
|   175102 |
+--+
1 row in set (0.01 sec)
 
mysql describe TICKER_HISTORY_PRICE_DATA;
+---+-+--+-++---+
| Field | Type| Null | Key | Default| Extra |
+---+-+--+-++---+
| price_data_ticker | char(8) |  | PRI ||   |
| price_data_date   | date    |      | PRI | 0000-00-00 |       |
| price_data_open   | float   | YES  | | NULL   |   |
| price_data_high   | float   | YES  | | NULL   |   |
| price_data_low| float   | YES  | | NULL   |   |
| price_data_close  | float   | YES  | | NULL   |   |
| price_data_volume | float   | YES  | | NULL   |   |
+---+-+--+-++---+
7 rows in set (0.00 sec)
 
mysql

And thank you again,

Mikhail Berman

-Original Message-
From: Jeremy Cole [mailto:[EMAIL PROTECTED] 
Sent: Friday, November 18, 2005 11:01 AM
To: Mikhail Berman
Cc: Jasper Bryant-Greene; mysql@lists.mysql.com
Subject: Re: A key question

Hi Mikhail,

 Thank you for your help.
 
 I do have an exact situation you have assume I have. Here is the 
 output of SHOW CREATE TABLE
 
   `price_data_ticker` char(8) NOT NULL default '',
   `price_data_date` date NOT NULL default '0000-00-00',

  ...

   UNIQUE KEY `tidadx` (`price_data_ticker`,`price_data_date`)

  ...

 As you can see, the Unique KEY is on the two first fields, but most of the 
 work, joins & searches, will be done on the second field,
price_data_date.

Could you provide some example queries?

Likely the solution is to create another index on price_data_date, that
could be used for searches by date that do not include ticker.

As I mentioned before, an index on (a, b) can be used for (a) but not
for (b) alone.  However, it usually doesn't make sense to create an
index on (b, a) as well, since if you have both columns in your query,
usually the index on (a, b) would be fine.  So I would suggest adding an
index:

   ALTER TABLE `TICKER_HISTORY_PRICE_DATA`
 ADD INDEX (price_data_date);

Keep in mind that will lock the table to add the index, and may take a
few minutes (although I would expect less than two minutes for 32M rows)
so it might not be a good idea to run while the market is open. :)

If you could provide the exact query you were running, I could confirm
that it would or would not help. :)

Regards,

Jeremy

--
Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: A key question

2005-11-18 Thread Mikhail Berman
Michael,

Thank you for your comments.

This gives me new ideas on how to work with these issues.

And no, at this point we are not planning to work with the price_data_ticker
field itself.

Regards,

Mikhail Berman

-Original Message-
From: Michael Stassen [mailto:[EMAIL PROTECTED] 
Sent: Friday, November 18, 2005 12:11 PM
To: Mikhail Berman
Cc: Jeremy Cole; Jasper Bryant-Greene; mysql@lists.mysql.com
Subject: Re: A key question

Mikhail Berman wrote:
 Dear Jeremy,
 
 Thank you for your help.
 
 I do have an exact situation you have assume I have. Here is the 
 output of SHOW CREATE TABLE
 
CREATE TABLE `TICKER_HISTORY_PRICE_DATA` (
   `price_data_ticker` char(8) NOT NULL default '',
   `price_data_date` date NOT NULL default '0000-00-00',
   `price_data_open` float default NULL,
   `price_data_high` float default NULL,
   `price_data_low` float default NULL,
   `price_data_close` float default NULL,
   `price_data_volume` float default NULL,
   UNIQUE KEY `tidadx` (`price_data_ticker`,`price_data_date`)
 ) ENGINE=MyISAM DEFAULT CHARSET=latin1 |
 
 As you can see, Unique KEY is on two first fields, but most of the 
 work, joins  searches, will be done on the second field
price_data_date.

As others have pointed out, your UNIQUE KEY on
(price_data_ticker,price_data_date) will serve as an index to speed
queries which search for a specific value of price_data_ticker and
queries which search for a specific combination of values of
price_data_ticker and price_data_date, but it won't help queries which
only search by price_data_date.  Yet, most of the work, joins &
searches, will be done on the second field, price_data_date. 
  In that case, you definitely need an index on price_data_date.  Based
on your description, I'd suggest you have your index backwards.  What
you need is an index on (price_data_date, price_data_ticker).  This will
satisfy searches on price_data_date and on combinations of the two.
Hence,

   ALTER TABLE TICKER_HISTORY_PRICE_DATA
   DROP INDEX tidadx,
   ADD PRIMARY KEY (price_data_date, price_data_ticker);

That will satisfy most of your queries.  Then, the question becomes, do
you need a separate, single-column index on price_data_ticker?  That
will depend on whether you run queries which select based on
price_data_ticker without specifying price_data_date.

Michael



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: A bit of SQL help for a MySQL novice.

2005-11-17 Thread Mikhail Berman
Hi Rick,

Below are some MySQL functions that might help with your problem. 

Sorry for not much of a direct answer. 

But the idea is that you can combine some of these (string) functions -
http://dev.mysql.com/doc/refman/5.0/en/string-functions.html - to parse
out the string you are looking for. If you are programming on one of the
Unixes, you could also pull your answer into a Unix script to parse out
whatever you need.
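
For instance, a sketch that nests several of the functions below, using the
column and table names from your query, to strip the punctuation you listed
before taking the first four characters:

mysql> SELECT LEFT(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
    ->   phone,'(',''),')',''),'-',''),' ',''),'.',''), 4) AS phoneareacode
    -> FROM phonetable;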


I.

CONCAT_WS(separator, str1, str2,...) 

CONCAT_WS(separator, str1, str2,...)
CONCAT_WS() stands for CONCAT With Separator and is a special form of
CONCAT(). The first argument is the separator for the rest of the
arguments. The separator is added between the strings to be
concatenated. The separator can be a string as can the rest of the
arguments. If the separator is NULL, the result is NULL. The function
skips any NULL values after the separator argument. 
mysql> SELECT CONCAT_WS(',',
    'First name','Second name','Last Name');
        -> 'First name,Second name,Last Name'
mysql> SELECT CONCAT_WS(',','First name',NULL,'Last Name');
        -> 'First name,Last Name'

Before MySQL 4.0.14, CONCAT_WS() skips empty strings as well as NULL
values. 

II.

FIND_IN_SET(str,strlist) 

FIND_IN_SET(str,strlist)
Returns a value 1 to N if the string str is in the string list strlist
consisting of N substrings. A string list is a string composed of
substrings separated by `,' characters. If the first argument is a
constant string and the second is a column of type SET, the
FIND_IN_SET() function is optimized to use bit arithmetic. Returns 0 if
str is not in strlist or if strlist is the empty string. Returns NULL if
either argument is NULL. This function will not work properly if the
first argument contains a comma (`,') character. 
mysql> SELECT FIND_IN_SET('b','a,b,c,d');
        -> 2

III.

INSTR(str,substr)
Returns the position of the first occurrence of substring substr in
string str. This is the same as the two-argument form of LOCATE(),
except that the arguments are swapped. 
mysql> SELECT INSTR('foobarbar', 'bar');
        -> 4
mysql> SELECT INSTR('xbar', 'foobar');
        -> 0

This function is multi-byte safe. In MySQL 3.23, this function is case
sensitive. For 4.0 on, it is case sensitive only if either argument is a
binary string. 

IV.

LEFT(str,len) 

LEFT(str,len)
Returns the leftmost len characters from the string str. 
mysql> SELECT LEFT('foobarbar', 5);
        -> 'fooba'


V

LENGTH(str) 

LENGTH(str)
Returns the length of the string str, measured in bytes. A multi-byte
character counts as multiple bytes. This means that for a string
containing five two-byte characters, LENGTH() returns 10, whereas
CHAR_LENGTH() returns 5. 
mysql> SELECT LENGTH('text');
        -> 4


Mikhail Berman

-Original Message-
From: Peter Brawley [mailto:[EMAIL PROTECTED] 
Sent: Thursday, November 17, 2005 1:51 PM
To: Rick Dwyer
Cc: mysql@lists.mysql.com
Subject: Re: A bit of SQL help for a MySQL novice.

Rick

 I need to read the first 4 positions in the phone number to determine
it's location.

 My statement looks like this:
 'Select mid(phone, 1,4) as phoneareacode from phonetable'

 This works but if the number is entered as 1(203)-555-1212 the above  
would return   1(20 which is not what I am looking for.

You need an unpunct() function. Not available in 4 or 5, easy to write
in 5.0 as a stored function, not hard to add as a 'C' udf in 4.1 if you
write 'C'. Since it's a common requirement, likely someone has written
it. Failing that, you may be stuck with the absurd
replace(replace(replace(replace(replace(@s,'(',''),')',''),' 
',''),'-',''),'.','').

PB

-

Rick Dwyer wrote:

 Hello All.

 I am hoping for a bit of help with some code that has really given me 
 some trouble.  If this is not he correct forum for this any help in 
 pointing me to a more suited list would be appreciated.

 I have a MySQL 4.1.x database containing records with phone numbers.

 Most of the phone numbers are enter in 12035551212 format, but some 
 are entered with spaces or - or ( or other characters.

 I need to read the first 4 positions in the phone number to determine 
 it's location.

 My statement looks like this:
 'Select mid(phone, 1,4) as phoneareacode from phonetable'

 This works but if the number is entered as 1(203)-555-1212 the above  
 would return   1(20 which is not what I am looking for.

 Is there a way to have the select statement examine only numeric 
values in the phone number so it would disregard the other characters?

 In Lasso, you can use a Replace with a Regular Expression function to 
 have just the digits 0-9 examined but haven't been able find  a way to

 do this in SQL.

 Any help is appreciated.
 Thank you.
 Rick












-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com

A key question

2005-11-17 Thread Mikhail Berman
Hello everyone,
 
Is it possible, or does it make sense, to key a field that is already part of a
Unique Index?
 
Regards and thank you
 
Mikhail Berman


RE: A key question

2005-11-17 Thread Mikhail Berman
Dear Jasper,

I may not have been precise in my question, but the Unique Index in
question is a two-field index, and I was looking for wisdom
from the List on whether there is sense in, and/or experience with, separately
keying the second field of the Unique Index to speed up a search.
I am dealing with a 32M-row table, where the second field in the Unique Index
is a date field. Unfortunately, my simple SELECT MAX(Date) AS
Latest_Date FROM THE_TABLE took 4 minutes and some seconds, so before I
go and buy a bigger server I needed to reassure myself that there is
no other way.

I would not bother the List without a good reason, or without first doing what
you suggested.

Regards,

Mikhail Berman

-Original Message-
From: Jasper Bryant-Greene [mailto:[EMAIL PROTECTED] 
Sent: Thursday, November 17, 2005 4:19 PM
To: Mikhail Berman
Cc: mysql@lists.mysql.com
Subject: Re: A key question

Mikhail Berman wrote:
 Is it possible or  makes sense to key a field that is a part of Unique

 Index already?

It's possible, but it doesn't make sense. A unique index is a normal
index with an added unique constraint. Adding another index on the same
field would make no sense (unless the field is a rightmost part of a
unique index).

Your question would have been answered by checking the archives less
than a week back (this question comes up a lot) or reading the manual.

Jasper

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



How do I?

2005-11-15 Thread Mikhail Berman
Hello everyone,
 
I have changed the value of max_allowed_packet in my.cnf. 
 
Do I need to restart MySQL itself for the change to take place or is
there a way to refresh this variable (parameter) without restart of
MySQL?
 
I have looked through documentation and www.mysql.com and could not find
the answer.
 
Below is info on my environment:
 
mysql status
--
mysql  Ver 14.7 Distrib 4.1.9, for sun-solaris2.8 (sparc)
 
Connection id:  7230
Current database:
Current user:   xx
SSL:Not in use
Current pager:  stdout
Using outfile:  ''
Using delimiter:;
Server version: 4.1.9-standard
Protocol version:   10
Connection: Localhost via UNIX socket
Server characterset:latin1
Db characterset:latin1
Client characterset:latin1
Conn.  characterset:latin1
UNIX socket:/tmp/mysql.sock
Uptime: 12 days 11 hours 48 min 22 sec
 
Threads: 1  Questions: 47908924  Slow queries: 983  Opens: 4964  Flush
tables: 1  Open tables: 275  Queries per second avg: 44.389

 
Thank you in advance,
 
Mikhail Berman


RE: How do I?

2005-11-15 Thread Mikhail Berman


Thank you Sujay and everyone else for your help 


Mikhail Berman

-Original Message-
From: Sujay Koduri [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, November 15, 2005 11:03 AM
To: Mikhail Berman; mysql@lists.mysql.com
Subject: RE: How do I?


Yes, you can change it at runtime by doing the following:

SET GLOBAL max_allowed_packet = 1024; (or whatever size)

Not only this variable: whatever variables are listed here,
http://dev.mysql.com/doc/refman/5.0/en/dynamic-system-variables.html ,
you can change at runtime without restarting the server.

Have a look at this also
http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html
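
For example (the size here is only an illustration; note that clients that
are already connected keep the max_allowed_packet value they started with,
while new connections pick up the new global value):

mysql> SET GLOBAL max_allowed_packet = 67108864;
mysql> SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';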

sujay
-Original Message-
From: Mikhail Berman [mailto:[EMAIL PROTECTED]
Sent: Tuesday, November 15, 2005 8:58 PM
To: mysql@lists.mysql.com
Subject: How do I?

Hello everyone,
 
I have changed a value of  max_allowed_packet in  my.cnf. 
 
Do I need to restart MySQL itself for the change to take place or is
there a way to refresh this variable (parameter) without restart of
MySQL?
 
I have looked through documentation and www.mysql.com and could not find
the answer.
 
Below is info on my environment:
 
mysql status
--
mysql  Ver 14.7 Distrib 4.1.9, for sun-solaris2.8 (sparc)
 
Connection id:  7230
Current database:
Current user:   xx
SSL:Not in use
Current pager:  stdout
Using outfile:  ''
Using delimiter:;
Server version: 4.1.9-standard
Protocol version:   10
Connection: Localhost via UNIX socket
Server characterset:latin1
Db characterset:latin1
Client characterset:latin1
Conn.  characterset:latin1
UNIX socket:/tmp/mysql.sock
Uptime: 12 days 11 hours 48 min 22 sec
 
Threads: 1  Questions: 47908924  Slow queries: 983  Opens: 4964  Flush
tables: 1  Open tables: 275  Queries per second avg: 44.389

 
Thank you in advance,
 
Mikhail Berman

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Creating a table on MySQL from Access

2005-10-18 Thread Mikhail Berman
Possible solutions; I have not tried all of them myself.

1. Create an ODBC connection to your MySQL database, using MySQL
Connector/ODBC - this works, but sometimes gives problems.
2. Try to use an MS-Access pass-through query to create the tables - never
tried this way. 
3. Do a search for "link tables" in the MS Visual Basic Help for MS-Access to
find a way to link the tables. You might not need to link tables.
A possible solution that came out of the above-named search:

VB HELP
SourceConnectStr, SourceDatabase Properties
You can use these properties to access
external data when you can't link the external tables to your database.

The SourceConnectStr property specifies the name of the application used
to create an external database. 
The SourceDatabase property specifies the external database in which the
source tables or queries for a query reside. 
Note  The SourceConnectStr and SourceDatabase properties apply to all
queries except data-definition, pass-through, and union queries.

Setting
You use a string expression to set the value of the SourceConnectStr and
SourceDatabase properties.

You can set these properties in the query's property sheet or in SQL
view of the Query window. In the SQL statement, the properties
correspond to the IN clause.


Note  If you are accessing multiple database sources, use the Source
property instead of the SourceConnectStr and SourceDatabase properties.


Remarks
You must use the SourceConnectStr and SourceDatabase properties to
access tables from external databases that were created in applications
that don't use linked tables (linked table: A table stored in a file
outside the open database from which Access can access records. You can
add, delete, and edit records in a linked table, but you cannot change
its structure.).

The following are examples of these property settings:

For a Microsoft Access database, the SourceDatabase property setting is
the path and database name (for example, C:\Accounts\Customers).
Microsoft Access adds the file name extension automatically. The
SourceConnectStr property doesn't have a value for a Microsoft Access
database. 
For a database created by a product such as Paradox, the SourceDatabase
property setting is the path (for example, C:\Pdoxdata). The
SourceConnectStr property setting is the specifier for the database type
(for example, Paradox 3.x;). For a list of specifiers, see the ADO
Connect property. 
The following example uses dBASE IV tables in the C:\Dbdata directory as
the source of data for the query. 
SELECT Customer.COMPANYNAM, Orders.ORDERID, Orders.ORDERDATE
FROM Customer INNER JOIN Orders
ON Customer.CUSTOMERID = Orders.CUSTOMERID
IN 'C:\Dbdata'[dBASE IV;];


For an Open Database Connectivity (ODBC) database, the SourceConnectStr
property setting is the name of the source database and any additional
information required by the product, such as a logon identification (ID)
and password. For example, for a Microsoft SQL Server database the
setting might be: 
ODBC;DSN=salessrv;UID=jace;PWD=password;DATABASE=sales;

The SourceDatabase property doesn't have a value for an ODBC database.
VB HELP


Mikhail Berman

-Original Message-
From: Scott Hamm [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, October 18, 2005 9:46 AM
To: 'Mysql '
Subject: Creating a table on MySQL from Access

Is there a way I can create a table on MySQL using Access and establish
a link remotely?
--
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



What are the reasons?

2005-02-01 Thread Mikhail Berman
Hi everybody,

What are the possible reasons for a runaway MySQL process on Sun Solaris, beyond
joins on fields with NULLs?

Regards,

Mikhail Berman