On 2015-02-15 23:55, Learner Study wrote:
> Hello experts,
>
> Is it possible for MySQL server to automatically merge responses for
> different queries into a single response? Are there any kernel
> parameters that may dictate that?
"UNION is used to combine the result from multiple SELECT statem
On 2013-06-27 01:27, nixofortune wrote:
Now importing with keys in place. It takes longer, much longer, but at
least the server is working and customers do not complain.
Schema design is awful, agreed. I'm trying to understand the process so I will
redesign it soon, but any suggestions are welcome.
I'
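A common trick for bulk-loading a MyISAM table is to defer non-unique index maintenance during the load; a sketch, assuming a table named big_table and a server-side data file:

```sql
-- MyISAM only: stop updating non-unique indexes row by row
ALTER TABLE big_table DISABLE KEYS;
LOAD DATA INFILE '/tmp/big_table.csv' INTO TABLE big_table;
-- Rebuild the disabled indexes in one bulk pass (much faster)
ALTER TABLE big_table ENABLE KEYS;
```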
On 2013-06-26 18:31, nixofortune wrote:
> What would be the best way to convert a BIG MyISAM table into InnoDB? We do
> not have a slave.
I would do it on another computer. Then copy the table to the server and then
add the data that has been added from the original table.
And/or I would experiment w
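The in-place route is a single statement; a sketch, assuming the table is called big_table:

```sql
-- Rewrites the whole table in the new engine; locks it for the duration,
-- which is why the load-elsewhere-then-catch-up approach above exists
ALTER TABLE big_table ENGINE=InnoDB;
```

The copy-on-another-machine approach amounts to loading an InnoDB twin of the table offline, copying it back, then importing only the rows added since (e.g. above an auto-increment watermark).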
Should I disable Hyper-Threading on an Intel Xeon 8-core CPU or leave it on?
For older versions of MySQL I have read that it should be disabled, but the
newer versions are said to handle multiple cores/CPUs better. However, I
can't find anything on whether HT is beneficial or not.
MySQL 5.5.10+, 24GB DDR3
On 2011-02-14 15:43, Singer X.J. Wang wrote:
So I'm assuming OLTP type transaction, then I'm going to recommend
MySQL 5.5.
Why should that flavor be chosen over MariaDB with XtraDB or Percona Server
with XtraDB?
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubsc
On 2011-02-14 15:31, Singer X.J. Wang wrote:
What is your load type?
Heavy read, but enough writes not to benefit much from the query cache. It is
a webshop app (custom).
We are about to migrate from MySQL 4.1 to a 5.5 version. We heavily use
InnoDB, have a dual quad-core Nehalem Xeon, 24GB DDR3, 4*SSD in RAID-10 on
an Adaptec RAID with 512MB Cache and running under x64 Linux on a modern
kernel. We replicate to several other slaves.
I only have experience on vanilla MySQL
On 2010-05-09 13:29, Prabhat Kumar wrote:
INSERT INTO myTable_info (id, range, total_qt, qt_correct, finish_time,
username, datestamp) VALUES (NULL, 'Kumar', '20', '17', '111', 'Prabhat',
'NOW()');
Last_SQL_Error: Error 'You have an error in your SQL syntax; check the
manual that corresponds to
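The likely cause is the unquoted column name `range`, which is a reserved word in MySQL 5.1+ (and note that 'NOW()' in quotes is a string literal, not the function). A corrected sketch of the same statement:

```sql
-- Backticks let the reserved word `range` be used as a column name;
-- NOW() is called unquoted so the current timestamp is stored
INSERT INTO myTable_info
  (id, `range`, total_qt, qt_correct, finish_time, username, datestamp)
VALUES
  (NULL, 'Kumar', '20', '17', '111', 'Prabhat', NOW());
```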
Gavin Towey wrote:
What Shawn said is important.
Better options:
1. Use InnoDB, and then you can make a consistent backup with `mysqldump
--single-transaction > backup.sql` and keep your db server actively responding
to requests at the same time.
2. Use something like LVM to create filesystem snapshots
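Option 1 in command form (the database name is hypothetical):

```shell
# Consistent snapshot of InnoDB tables without blocking writers;
# --master-data=2 records the binlog coordinates as a comment,
# useful for setting up a slave or point-in-time recovery
mysqldump --single-transaction --master-data=2 mydb > backup.sql
```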
ishaq gbola wrote:
Thanks a lot for that, but where does this file get saved, and how can I copy
it to my local host if the database is on a remote server?
If you don't specify an absolute location it can be found in
"DATADIR/DatabaseName/". And after you have located the file you have a
multit
ishaq gbola wrote:
Hi all,
I would like to know if there is a tool or command in MySQL that allows one to
export the result of a query into Excel format.
select * from table into outfile "thefile.txt";
That can be imported into Excel as CSV, using "TAB" as the separator.
http://code.anjanes
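If Excel's CSV import is the goal, the OUTFILE clauses can produce a comma-separated file directly; note the file is written on the database server's host, not the client. A sketch with hypothetical names:

```sql
-- Comma-separated, quoted fields: opens directly in Excel
SELECT * FROM mytable
INTO OUTFILE '/tmp/mytable.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';
```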
Madison Kelly wrote:
Hi all,
I've got a fairly large set of databases I'm backing up each Friday.
The dump takes about 12.5h to finish, generating a ~172 GB file. When
I try to load it though, *after* manually dumping the old databases,
it takes 1.5~2 days to load the same databases. I am gue
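A restore like this is often dominated by index and constraint maintenance rather than raw I/O; wrapping the load as below usually helps for an InnoDB-heavy dump. A sketch (run from the mysql client; SET sql_log_bin needs the SUPER privilege):

```sql
-- Defer integrity checks and binary logging for the bulk load
SET foreign_key_checks = 0;
SET unique_checks = 0;
SET sql_log_bin = 0;   -- skip only if slaves don't need this data
SOURCE backup.sql;
SET sql_log_bin = 1;
SET unique_checks = 1;
SET foreign_key_checks = 1;
```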
D. Dante Lorenso wrote:
All,
I am using MySQL currently, but am starting to think that maybe I
don't really need to use an RDBMS. The data I am storing ends up
getting indexed with Sphinx because I have full-text indexes for about
40 million records.
I have an "items" table that is heavily
Tompkins Neil wrote:
Following my previous email. I've now configured my database connection
using an ODBC DSN-less SSL connection. However, the problem still remains: the
password is stored in the ASP file in plain text. Does anyone have any
recommendations on how to overcome this issue ?
Sec
Krishna Chandra Prajapati wrote:
Hi Experts,
I have a crm table where 12 million records are inserted per day. We are
running report queries on this table and using the partitioning feature for
faster results. We have to maintain 45 days of data, i.e. 540 million records.
As per my calculation, 540 million records wil
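With RANGE partitioning by day, expiring a day of data becomes a metadata operation instead of a huge DELETE; a minimal sketch (table and column names are hypothetical):

```sql
CREATE TABLE crm_events (
  id BIGINT NOT NULL,
  event_time DATETIME NOT NULL,
  payload VARCHAR(255)
) PARTITION BY RANGE (TO_DAYS(event_time)) (
  PARTITION p20090101 VALUES LESS THAN (TO_DAYS('2009-01-02')),
  PARTITION p20090102 VALUES LESS THAN (TO_DAYS('2009-01-03')),
  PARTITION pmax      VALUES LESS THAN MAXVALUE
);

-- Expire the oldest day almost instantly (no row-by-row delete)
ALTER TABLE crm_events DROP PARTITION p20090101;
```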
Or :
alter table users add first_f_name char(1) not null;
create index first_f_name_idx on users (first_f_name);
update users set first_f_name = left(first_name,1);
And now the query will use the index.
select username from users where first_f_name between "A" and "B";
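For the original F-to-P question, the one-character helper column also sidesteps a subtle BETWEEN pitfall; a sketch:

```sql
-- On the CHAR(1) column, BETWEEN 'F' AND 'P' includes 'P', so 'Peter'
-- matches. Comparing full names (first_name BETWEEN 'F' AND 'P') would
-- wrongly exclude any name sorting after the bare string 'P'.
SELECT username
FROM users
WHERE first_f_name BETWEEN 'F' AND 'P';
```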
Dave M G wrote:
MySQL,
This should be a fairly simple question.
I have a table with a bunch of people's names. I want to find people
whose name begins within a certain range of characters.
All names between F and P, for example.
What SELECT statement would I use to do that?
Thank you for any
sudhir543-nima...@yahoo.com wrote:
I have come across a requirement where I need to store a very large amount of
data in a table. In one of our apps we can have around 50 million records each
year. Can anyone guide me in choosing a strategy that can handle this load.
50M records is not that much
Eric Bergen wrote:
Jay,
Are you using the replicate-do-db option on the slave? This option
relies on 'use' being set correctly when the query is issued. A quote
from the manual explains it better than I can:
"Tells the slave to restrict replication to statements where the
default database (that i
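The caveat in practice: with replicate-do-db = mydb on the slave, only the statement's *default* database matters, so a cross-database update issued from elsewhere is skipped. A sketch with hypothetical names:

```sql
-- Slave configured with replicate-do-db = mydb
USE mydb;
UPDATE t SET x = 1;        -- replicated: default database is mydb

USE otherdb;
UPDATE mydb.t SET x = 1;   -- silently NOT replicated: default db is otherdb
```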
I have a problem with an update query not replicating through to the slave.
The query is "update content_review_site as a,site_rating_factors as b set
a.overall_rating = 77 where a.content_id=243"
Version : 4.0.22
OS : Linux X86
How to replicate the error:
CREATE TABLE content_review_site (
con