Re: Merging multiple SQL requests

2015-02-21 Thread Jay Ess
On 2015-02-15 23:55, Learner Study wrote:
 Hello experts,
 
 Is it possible for MySQL server to automatically merge responses for
 different queries into a single response? Are there any kernel
 parameters that may dictate that?

UNION is used to combine the result from multiple SELECT statements into a
single result set.
http://dev.mysql.com/doc/refman/5.0/en/union.html
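
For example, a minimal sketch (the table and column names are made up for illustration):

select id, name from customers_2014
union all
select id, name from customers_2015;
-- UNION removes duplicate rows; UNION ALL keeps them and is usually faster.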






Re: space gone after MyISAM REPAIR TABLE

2013-06-26 Thread Jay Ess
On 2013-06-26 18:31, nixofortune wrote:
 What would be the best way to convert a BIG MyISAM table into InnoDB? We do not
 have a SLAVE.

I would do the conversion on another computer, then copy the table to the server
and add any rows that have been added to the original table in the meantime.

And/or I would experiment with TokuDB. I haven't had the time to do it myself but
probably will soon. I too am looking at a lengthy 1 billion+ row conversion.
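
A rough sketch of the do-it-on-another-machine approach, assuming the table is
called bigtable and has an auto-increment primary key id (both names are
assumptions for illustration):

-- On the spare machine, against a copy of the table:
ALTER TABLE bigtable ENGINE=InnoDB;

-- After copying the converted table back to the production server, top up the
-- rows that were inserted into the original MyISAM copy in the meantime
-- (assumes the original is still available as bigtable_old):
INSERT INTO bigtable
SELECT * FROM bigtable_old WHERE id > (SELECT MAX(id) FROM bigtable);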





Re: space gone after MyISAM REPAIR TABLE

2013-06-26 Thread Jay Ess

On 2013-06-27 01:27, nixofortune wrote:
Now importing with keys in place. It takes longer, much longer, but at
least the server is working and customers do not complain.
Schema design is awful, I agree. I am trying to understand the process and will
redesign it soon, but any suggestions are welcome.

I'm not a MySQL super guru, so I will be glad to hear your thoughts, guys.
Thanks

You could probably reduce your table size a LOT by breaking out
keyword and source into their own tables and referencing them.
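
A minimal sketch of that normalization; the table and column definitions below
are assumptions, since the original schema was not posted:

CREATE TABLE keywords (
  keyword_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  keyword    VARCHAR(255) NOT NULL,
  UNIQUE KEY (keyword)
) ENGINE=InnoDB;

CREATE TABLE sources (
  source_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  source    VARCHAR(255) NOT NULL,
  UNIQUE KEY (source)
) ENGINE=InnoDB;

-- The big table then stores small integer ids (keyword_id, source_id)
-- instead of repeating the full strings in every row.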





Intel Hyperthreading benefits on MySQL 5.5.10+

2011-04-03 Thread Jay Ess

Should I disable Hyper-Threading on an Intel Xeon 8-core CPU or leave it on?
For older versions of MySQL I read that it should be disabled, but the newer
versions are said to handle multiple cores/CPUs better. However, I can't find
anything on whether HT is beneficial or not.


MySQL 5.5.10+, 24GB DDR3 RAM, 6 * SSD RAID-10 on an Adaptec card, Linux 2.6.






What MySQL-flavor to choose.

2011-02-14 Thread Jay Ess
We are about to migrate from MySQL 4.1 to a 5.5 version. We heavily use
InnoDB and have a dual quad-core Nehalem Xeon, 24GB DDR3, and 4*SSD in RAID-10
on an Adaptec RAID controller with 512MB cache, running under x64 Linux on a
modern kernel. We replicate to several other slaves.


I only have experience with vanilla MySQL versions (I compile my own). What
flavor (MariaDB, MySQL, Percona) should I choose, and why?





Re: What MySQL-flavor to choose.

2011-02-14 Thread Jay Ess

On 2011-02-14 15:31, Singer X.J. Wang wrote:

What is your load type?


Heavy read, but enough writes not to benefit much from the query cache. It is
a custom webshop app.





Re: What MySQL-flavor to choose.

2011-02-14 Thread Jay Ess

On 2011-02-14 15:43, Singer X.J. Wang wrote:
So I'm assuming OLTP type transaction, then I'm going to recommend 
MySQL 5.5.


Why should that flavor be chosen over MariaDB with XtraDB or Percona Server with
XtraDB?





Re: Out of range value for column 'datestamp' at row 1

2010-05-10 Thread Jay Ess

On 2010-05-09 13:29, Prabhat Kumar wrote:

INSERT INTO myTable_info (id, range, total_qt, qt_correct, finish_time,
username, datestamp) VALUES (NULL, 'Kumar', '20', '17', '111', 'Prabhat',
'NOW()');

Last_SQL_Error: Error 'You have an error in your SQL syntax; check the
manual that corresponds to your MySQL server version for the right syntax

I think the problem is with `datestamp` datetime NOT NULL default '0000-00-00
00:00:00',

Can anyone please suggest how to deal with this error?



Remove the quotes around NOW(). Quoting it makes NOW() a literal string and not
a function call.
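
For illustration, the corrected statement from the quoted post would look like
this (with backticks around range, since it can clash with a reserved word):

INSERT INTO myTable_info (id, `range`, total_qt, qt_correct, finish_time, username, datestamp)
VALUES (NULL, 'Kumar', '20', '17', '111', 'Prabhat', NOW());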





Re: better way to backup 50 Gig db?

2010-04-20 Thread Jay Ess

Gavin Towey wrote:

What Shawn said is important.

Better options:
1. Use InnoDB, and then you can make a consistent backup with `mysqldump
--single-transaction > backup.sql` and keep your db server actively responding
to requests at the same time.

2. Use something like LVM to create filesystem snapshots, which lets you back
up your database while only keeping a read lock on the db for a second or
so (see the sketch after this list).

3. Set up replication and back up the replicated data using any of the
above methods.
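
A rough sketch of the snapshot sequence from option 2; the LVM volume names and
size are made up, the SQL statements are the standard lock/unlock pair:

FLUSH TABLES WITH READ LOCK;
-- while the lock is held, take the snapshot from a shell on the server, e.g.:
--   lvcreate --snapshot --size 10G --name mysql_snap /dev/vg0/mysql
UNLOCK TABLES;
-- the datadir can then be copied from the mounted snapshot at leisure.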







Re: Exporting the result of a Query into excel

2010-01-05 Thread Jay Ess

ishaq gbola wrote:

Hi all,

I would like to know if there is a tool or command in MySQL that allows one to
export the result of a query into Excel format
  

select * from table into outfile 'thefile.txt';
That can be imported into Excel as CSV, using TAB as the separator.
http://code.anjanesh.net/2007/12/export-mysql-data-to-excel-compatible.html
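
If you prefer explicit CSV delimiters instead of the default tab-separated
output, a sketch (file path and table name are placeholders):

SELECT * INTO OUTFILE '/tmp/result.csv'
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  LINES TERMINATED BY '\n'
  FROM mytable;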




Re: Exporting the result of a Query into excel

2010-01-05 Thread Jay Ess

ishaq gbola wrote:

Thanks a lot for that, but where does this file get saved, and how can I copy
it to my local host if the database is on a remote server?
  
If you don't specify an absolute location it can be found in
DATADIR/DatabaseName/. Once you have located the file you have a
multitude of choices for how to transfer it: scp, ftp, http, mail, all
depending on what's installed on the server and what access you have to it.
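
For example, writing to an absolute path makes the file easy to find and fetch
afterwards (path, table, and host below are placeholders):

select * from mytable into outfile '/tmp/mytable_export.txt';
-- then, from your local machine:
--   scp user@dbserver:/tmp/mytable_export.txt .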







Re: Importing large databases faster

2009-12-17 Thread Jay Ess

Madison Kelly wrote:

Hi all,

I've got a fairly large set of databases I'm backing up each Friday. 
The dump takes about 12.5h to finish, generating a ~172 GB file. When 
I try to load it though, *after* manually dumping the old databases, 
it takes 1.5~2 days to load the same databases. I am guessing this is, 
at least in part, due to indexing.


My question is: given an empty target DB and a dump file generated via:

ssh r...@server mysqldump --all-databases -psecret > /path/to/backup.sql

I use the -e -v -f -q -Q -K parameters for mysqldump on large
tables/databases. That does what you are asking for: it disables key
generation until all of the data is inserted, and it uses multi-row insert
statements instead of an individual insert statement for every row, which
speeds things up considerably.
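
Roughly, those options make the dump wrap each table's data like this (a sketch
of the pattern, not exact mysqldump output):

/*!40000 ALTER TABLE `mytable` DISABLE KEYS */;        -- from -K / --disable-keys
INSERT INTO `mytable` VALUES (1,'a'),(2,'b'),(3,'c');  -- from -e / --extended-insert
/*!40000 ALTER TABLE `mytable` ENABLE KEYS */;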





Re: How to not lock anything?

2009-12-15 Thread Jay Ess

D. Dante Lorenso wrote:

All,

I am using MySQL currently, but am starting to think that maybe I 
don't really need to use an RDBMS.  The data I am storing ends up 
getting indexed with Sphinx because I have full-text indexes for about 
40 million records.


I have an items table that is heavily updated with 40 million 
records every 1 or 2 days and I need all those items indexed so they 
can be searched.  The problem that I'm having is that the table is 
constantly locked because an insert or delete is being performed.


I am playing with InnoDB vs. MyISAM and have been trying to figure out 
how to get the best performance.  I actually don't care about dirty 
reads, however, and wouldn't mind if all the 40 mm records could be 
read/inserted/updated/deleted without any locking at all.  Are there 
known solutions for the kind of storage I am looking for?  Anyone have 
any pointers?  Is there a MySQL Storage Engine designed for this kind 
of usage, or is there a another server that is commonly used along 
with MySQL for this type of thing?

Double buffering:
Have two identical tables.
Apply the updates to the non-active one and, when ready, make that table the
active one. Then apply the same updates to the old, now non-active table while
the new active table can be read pretty much without disturbance.

Make the two tables reside on separate disks if you don't have enough IO.
Sure it's dirty, but it works.

If you entirely rebuild your dataset from scratch, use this approach:
Create an empty table from the live table definition (CREATE TABLE tmp
SELECT * FROM livetable LIMIT 0;).

Now rebuild your dataset into table tmp.
Drop the live table.
Rename the tmp table to the live table name.
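
A sketch of that rebuild-and-swap in SQL; RENAME TABLE can swap both names in
one atomic statement, which avoids a moment where no live table exists:

CREATE TABLE tmp SELECT * FROM livetable LIMIT 0;
-- ... bulk-load the rebuilt dataset into tmp ...
RENAME TABLE livetable TO livetable_old, tmp TO livetable;
DROP TABLE livetable_old;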






Re: DELETE DATA FROM TABLE

2009-11-19 Thread Jay Ess

Krishna Chandra Prajapati wrote:

Hi Experts,

I have a crm table where 12 million records are inserted per day. We are running
report queries on this table and using the partitioning features for faster
results. We have to maintain 45 days of data, which means 540 million records. As
per my calculation, 540 million records will use 1.8 TB of disk space. Total disk
space available is 2.3 TB.

Deleting data doesn't free up the disk space, so I was thinking of rotating
the table, but we don't have enough disk space.

Any idea how this task can be performed?

Any idea or suggestion is highly appreciated.
  
The space is freed inside the tablespace but is not seen on disk. Use
SHOW TABLE STATUS to see a table's Data_free value.
If you promptly want to free the space so that you can see it on the file
system, you can use the OPTIMIZE TABLE command. But the operation can be
slow and the table will be locked.
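
For example (mytable is a placeholder table name):

SHOW TABLE STATUS LIKE 'mytable';   -- check the Data_free column
OPTIMIZE TABLE mytable;             -- rebuilds the table and returns the space to the file system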







Re: Fwd: ODBC MySQL Password as plain text

2009-11-19 Thread Jay Ess

Tompkins Neil wrote:

Following my previous email.  I've now configured my database connection
using a ODBC DNSLESS SSL connection. However the problem still remains, the
password is stored in the ASP file in plain text.  Does anyone have any
recommendations on how to overcome this issue ?
  

Secure access to the ASP source file.
You *could* encrypt it, but then you have to store the key somewhere the ASP
can access, which leaves you with the same problem. Catch-22.







Re: Alphabetical search to and from

2009-11-04 Thread Jay Ess

Dave M G wrote:

MySQL,

This should be a fairly simple question.

I have a table with a bunch of people's names. I want to find people
whose names begin with a letter in a certain range.

All names between F and P, for example.

What SELECT statement would I use to do that?

Thank you for any advice.

  

Slow version (no use of index):
select username from users where left(username,1) between 'A' and 'B';
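
For reference, a range condition directly on the column can use an index on
username, if one exists (a sketch, using the F-to-P range from the question):

select username from users where username >= 'F' and username < 'Q';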




Re: Alphabetical search to and from

2009-11-04 Thread Jay Ess

Or:
alter table users add first_f_name char(1) not null;
create index first_f_name_idx on users (first_f_name);
update users set first_f_name = left(first_name,1);

And now the query will use the index:
select username from users where first_f_name between 'A' and 'B';




Re: 50M records each year, help me choosing the stretegy

2009-11-02 Thread Jay Ess

sudhir543-nima...@yahoo.com wrote:
I have come across a requirement where I need to store a very large amount of data in a table.
In one of our apps we can have around 50 million records each year. Can
anyone guide me in choosing a strategy that can handle this load?
50M records is not that bad if you only store a couple of bytes in every
row, so please describe your tables in more detail.

Also describe the expected access pattern for the data.




Re: Replication update bug/error/problem.

2004-12-08 Thread Jay Ess
Eric Bergen wrote:
Jay,
  Are you using the replicate-do-db option on the slave? This option
relies on 'use' being set correctly when the query is issued. A quote
from the manual explains it better than I can:
Tells the slave to restrict replication to statements where the
default database (that is, the one selected by USE) is db_name. To
specify more than one database, use this option multiple times, once
for each database. Note that this will not replicate cross-database
statements such as UPDATE some_db.some_table SET foo='bar' while
having selected a different database or no database
URL:http://dev.mysql.com/doc/mysql/en/Replication_Options.html
Other possibilities are to use show slave status; and show master
status; to make sure queries are actually being sent from the master
to the slave.
I am not using cross-database updates. It is all in one database, but the update
uses two tables.
The query "update content_review_site as a, site_rating_factors as b set
a.overall_rating = 77 where a.content_id=243" is a stripped-down version of a
bigger one, but I stripped it down to the point where it still fails. The failing
factor is when I use "content_review_site as a, site_rating_factors as b" (not a
cross-database but a cross-table query).

And I am using replicate-do-table for both of the tables in the query.
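
For reference, the corresponding slave configuration would look something like
this; the database name is a placeholder, since it was not given in the thread:

# my.cnf on the slave
replicate-do-table = mydb.content_review_site
replicate-do-table = mydb.site_rating_factors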


Replication update bug/error/problem.

2004-12-07 Thread Jay Ess
I have a problem with an update query not replicating through to the slave.
The query is: update content_review_site as a, site_rating_factors as b set
a.overall_rating = 77 where a.content_id=243

Version: 4.0.22
OS: Linux x86
How to reproduce the error:
CREATE TABLE content_review_site (
  content_id int(11) unsigned NOT NULL default '0',
  site_id int(10) unsigned NOT NULL default '0',
  overall_rating float(4,2) unsigned NOT NULL default '0.00',
  rating_software_and_graphics tinyint(4) unsigned NOT NULL default '0',
  rating_game_variety tinyint(4) unsigned NOT NULL default '0',
  rating_maximum_bonus tinyint(4) unsigned NOT NULL default '0',
  rating_bonus_match tinyint(4) unsigned NOT NULL default '0',
  rating_wagering_requirements tinyint(4) unsigned NOT NULL default '0',
  rating_payout_ratio tinyint(4) unsigned NOT NULL default '0',
  rating_multiplayer_games tinyint(4) unsigned NOT NULL default '0',
  rating_loyalty_bonus tinyint(4) unsigned NOT NULL default '0',
  rating_deposit_withdrawal_options tinyint(4) unsigned NOT NULL default '0',
  rating_support tinyint(4) unsigned NOT NULL default '0',
  rating_progressive_jackpots tinyint(4) unsigned NOT NULL default '0',
  rating_languages tinyint(4) unsigned NOT NULL default '0',
  rating_reputation_fairness tinyint(4) unsigned NOT NULL default '0',
  rating_free_bonus tinyint(4) unsigned NOT NULL default '0',
  create_ts datetime NOT NULL default '0000-00-00 00:00:00',
  modify_ts datetime NOT NULL default '0000-00-00 00:00:00',
  PRIMARY KEY  (content_id,site_id)
) TYPE=MyISAM;
CREATE TABLE site_rating_factors (
  site_id int(11) unsigned NOT NULL default '0',
  factor_software_and_graphics tinyint(4) unsigned NOT NULL default '0',
  factor_game_variety tinyint(4) unsigned NOT NULL default '0',
  factor_bonus_amount tinyint(4) unsigned NOT NULL default '0',
  factor_bonus_clauses tinyint(4) unsigned NOT NULL default '0',
  factor_payout_ratio tinyint(4) unsigned NOT NULL default '0',
  factor_multiplayer_games tinyint(4) unsigned NOT NULL default '0',
  factor_loyalty_bonus tinyint(4) unsigned NOT NULL default '0',
  factor_deposit_withdrawal_options tinyint(4) unsigned NOT NULL default '0',
  factor_support tinyint(4) unsigned NOT NULL default '0',
  factor_graphics tinyint(4) unsigned NOT NULL default '0',
  factor_progressive_jackpots tinyint(4) unsigned NOT NULL default '0',
  factor_languages tinyint(4) unsigned NOT NULL default '0',
  factor_reputation_fairness tinyint(4) unsigned NOT NULL default '0',
  factor_free_bonus tinyint(4) unsigned NOT NULL default '0',
  create_ts datetime NOT NULL default '0000-00-00 00:00:00',
  modify_ts datetime NOT NULL default '0000-00-00 00:00:00',
  PRIMARY KEY  (site_id)
) TYPE=MyISAM;
insert into site_rating_factors values (1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,now(),now());
insert into content_review_site values 
(243,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,now(),now());

On the slave, get the data with LOAD DATA FROM MASTER.
Both slave and master show:
+------------+----------------+
| content_id | overall_rating |
+------------+----------------+
|        243 |           1.00 |
+------------+----------------+
update content_review_site set overall_rating = 666;
updates the table just fine on both slave and master.
But the troubling query below does NOT; it only updates on the master:
update content_review_site as a, site_rating_factors as b set a.overall_rating =
77 where a.content_id=243;


Any suggestions?