Can someone tell me if what I'm trying to do can be done more efficiently? I
just got off the phone with Rackspace when my server was hung up, and they
found a whole bunch of copies of this one query stacked up, waiting and
locked.
Here's the query:
$query = insert ignore into
Hey all -
I've just uploaded some free Canada sample data to complement the US data that
was already available. Testing apps with a representative amount of sample data
is crucial to evaluate performance. Download the data here:
http://www.briandunning.com/sample-data/
Enjoy,
- Brian
Hey all -
I have a table with 12,000,000 records spread over about 6 years. I'm trying to
delete all but the last 2 years, but no matter how small of a group I try to
delete at a time, it keeps hanging up the server and I eventually have to
restart MySQL. The table looks like this:
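One approach that usually avoids hanging the server is to delete in small batches on an indexed date column, so locks are held only briefly between passes. A sketch, with hypothetical table and column names since the post's schema is truncated:

```sql
-- Repeat until it reports 0 rows affected. Each pass touches at most
-- 10,000 rows, so other queries get a chance to run in between.
-- An index on `creation` is assumed; without it every pass scans the table.
DELETE FROM big_table
WHERE creation < NOW() - INTERVAL 2 YEAR
ORDER BY creation
LIMIT 10000;
```

On MyISAM, an OPTIMIZE TABLE afterward reclaims the freed space.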
My brain just squirted out my ear trying to figure this out, so maybe one of
you can help me scoop it back up.
I have a table of accounts, and a table of hits-per-day per account. I'm trying
to sort my accounts by the most hits-per-day, averaged over the preceding 7
days (8 days ago through
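A common shape for this averages over a fixed 7-day window with a LEFT JOIN, dividing by 7 rather than by the number of matched rows so that days with no hits pull the average down. Table and column names below are placeholders, and the window ("8 days ago through 2 days ago" is one reading of the truncated post) is an assumption:

```sql
SELECT a.account_id,
       COALESCE(SUM(h.hits), 0) / 7 AS avg_daily_hits
FROM accounts a
LEFT JOIN hits_per_day h
       ON h.account_id = a.account_id
      AND h.hit_date BETWEEN CURDATE() - INTERVAL 8 DAY
                         AND CURDATE() - INTERVAL 2 DAY
GROUP BY a.account_id
ORDER BY avg_daily_hits DESC;
```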
You're exactly right, that's an important point that I neglected when putting
together my example. Good catch.
On Mar 18, 2010, at 11:27 AM, Chris W wrote:
I think I would change the math. Since there are several days in there where
there are no hits, that should in my opinion count
Hey all -
I have a table listing references for chapters in a book. I'm trying to find
all the cases where a single chapter lists more than one reference from the
same author. In this example table, I want it to find IDs 1 and 2, because
they're both from the same author, and both in chapter 1
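That pattern, more than one row sharing the same pair of values, is a GROUP BY ... HAVING question. A sketch against a hypothetical `refs(id, chapter, author)` table:

```sql
-- Each output row is a chapter/author pair cited more than once,
-- with the offending reference IDs collected for inspection.
SELECT chapter, author, GROUP_CONCAT(id) AS ref_ids
FROM refs
GROUP BY chapter, author
HAVING COUNT(*) > 1;
```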
, exchange, so will work for mapping, phone or
address validation, whatever your needs are. Hope someone finds it useful.
http://www.briandunning.com/sample-data/
On Jan 28, 2010, at 3:52 PM, Brian Dunning wrote:
I need a few million sample contact records - name, company, address, email,
web
Hey all -
I need a few million sample contact records - name, company, address, email,
web, phone, fax. ZIP codes and area codes and street addresses should be
correct and properly formatted, but preferably not real people or companies or
email addresses. But they'd work if you did address
Darn, it's not working after all.
SELECT
people.*,
COUNT ( DISTINCT cars.car_id ) AS car_count,
COUNT ( DISTINCT pets.pet_id ) AS pet_count,
SUM ( IF ( pets.date_bought > NOW() - INTERVAL 7 DAY, 1, 0 ) ) AS
new_pet_count
WHERE...etc
car_count and pet_count are calculating correctly, but
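When a query joins to both cars and pets, each pet row is repeated once per matching car row, so a plain SUM(IF(...)) over-counts. One common fix is to count distinct pet IDs, returning NULL (which COUNT ignores) for pets outside the window; a sketch:

```sql
COUNT(DISTINCT IF(pets.date_bought > NOW() - INTERVAL 7 DAY,
                  pets.pet_id, NULL)) AS new_pet_count
```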
I thought I could find the answer to this by googling, but had no luck.
How do I show a list of customers who:
(a) Have placed no orders within 14 days,
(b) Have been a customer for at least 14 days. (They do not have to
have placed any orders, ever, to be a customer.)
I'm trying to show
select into outfile was the ticket. Thanks!
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
More info. If the tables look like this:
customers
--
cust_id
signup_date
orders
--
order_id
cust_id
order_date
Then the SQL needs to accomplish something like this:
select * from customers
left join orders on customers.cust_id = orders.cust_id
where
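One way to finish that LEFT JOIN is to join only against *recent* orders and then keep the customers for whom no such order exists; a sketch under the schema above:

```sql
SELECT c.*
FROM customers c
LEFT JOIN orders o
       ON o.cust_id = c.cust_id
      AND o.order_date >= NOW() - INTERVAL 14 DAY  -- (a) recent orders only
WHERE c.signup_date <= NOW() - INTERVAL 14 DAY     -- (b) customer >= 14 days
  AND o.order_id IS NULL;                          -- no recent order matched
```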
Thanks. I added orders.order_date to the list of fields being
selected, but it still returns "Invalid use of group function".
On Nov 3, 2009, at 1:09 PM, Martin Gainty wrote:
any column used in any group functions such as SUM needs to be
requested in select column e.g.
select foo from
Thanks, this solved it!
On Nov 2, 2009, at 12:37 AM, Michael Dykman wrote:
I suspect 'distinct' might help you out here.
SELECT
people.*,
count(distinct cars.car_id) as car_count,
count(distinct pets.pet_id) as pet_count
Johnny - Your solution might actually help me solve my next step,
which is to also return a count of pets bought only within the last 7
days. Something like this:
SELECT people.*,
SUM(IF(cars.id IS NULL,0,1)) AS car_count,
SUM(IF(pets.id IS NULL,0,1)) AS pet_count,
SUM ( IF (
Hi all -
I have a table of PEOPLE, and a table of CARS owned by various people,
and a table of PETS owned by various people. Each person may have 0 or
more pets, and each person may have 0 or more cars. I'm trying to
return a list of all the people, showing how many pets each person
has,
Sorry if this is a n00b question, I did read the docs for mysqldump
before posting, but couldn't figure it out.
I'm successfully doing a mysqldump, but I want only a few of the
columns, and want them in a different order. Is this possible without
substantially slowing it down? The table
I have a table of projects with several columns for the IDs of some
people in various roles, and then a table of the people. How can I get
all the people for a given project, but keep their roles straight?
Projects
--
Project_ID
Boss_ID
Admin_ID
Assistant_ID
People
ID
Name
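The usual trick is to join the People table once per role column, giving each copy its own alias so the roles stay straight; LEFT JOINs keep projects whose roles are unfilled:

```sql
SELECT p.Project_ID,
       boss.Name      AS boss,
       admin.Name     AS admin,
       assistant.Name AS assistant
FROM Projects p
LEFT JOIN People boss      ON boss.ID      = p.Boss_ID
LEFT JOIN People admin     ON admin.ID     = p.Admin_ID
LEFT JOIN People assistant ON assistant.ID = p.Assistant_ID;
```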
Pretend I'm Netflix and I want to return a list of found movies,
including the average of related ratings for each movie. Something
like this:
select movies.*, average(ratings.rating) from movies, ratings where
movies.movie_id=ratings.movie_id
I'm sure that's wrong in about 10 different
) FROM movies
INNER JOIN ratings ON movies.movie_id=ratings.movie_id
GROUP BY movies.movie_id
Change the INNER JOIN to a LEFT JOIN if you want all movies, even
those with no ratings.
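Putting those pieces together with the original SELECT list (a sketch; note the aggregate is spelled AVG() in MySQL, not AVERAGE()):

```sql
SELECT movies.*, AVG(ratings.rating) AS avg_rating
FROM movies
LEFT JOIN ratings ON movies.movie_id = ratings.movie_id
GROUP BY movies.movie_id
ORDER BY avg_rating DESC;
```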
Brent Baisley
On Mon, Dec 22, 2008 at 11:13 AM, Brian Dunning br...@briandunning.com
wrote:
Pretend I'm Netflix
= '$meal_date';
-Micah
On 09/05/2008 03:41 PM, Brian Dunning wrote:
I'm trying to calculate glycemic index of all food items eaten in a
day with a single SELECT. The problem is the calculation for
glycemic index of each item requires a total of all items' carbs.
It's like a circular reference
How do I query "How many calories did Brian eat on 2009-09-04?"
Table: People
+-----------+-------+
| Person_ID | Name  |
+-----------+-------+
| 5         | Brian |
+-----------+-------+
Table: Meals
+---------+-----------+-----------+------+
| Meal_ID | Person_ID | Meal_Name | Date |
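The post is truncated before any calorie data appears, so the sketch below assumes a Calories column on Meals (purely hypothetical); the shape of the query is a two-table join filtered by person and date:

```sql
SELECT p.Name, SUM(m.Calories) AS total_calories
FROM People p
JOIN Meals m ON m.Person_ID = p.Person_ID
WHERE p.Name = 'Brian'
  AND m.Date = '2009-09-04'
GROUP BY p.Person_ID, p.Name;
```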
Thanks, that was exactly what I needed. :-)
I'm trying to calculate glycemic index of all food items eaten in a
day with a single SELECT. The problem is the calculation for glycemic
index of each item requires a total of all items' carbs. It's like a
circular reference. Here's what I'm trying:
SELECT
I have a table like this:
unique_serial - Auto Increment
field_1
field_2
field_3
The Primary key is a combination of field_1, field_2, and field_3.
I want to do:
INSERT IGNORE INTO table_name (field_1,field_2,field_3) VALUES
('xx','xx','xx')
Sometimes this will be an existing record,
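A documented MySQL idiom covers the "sometimes it already exists" case in one round trip: on a duplicate key, assigning the AUTO_INCREMENT column through LAST_INSERT_ID() makes a later LAST_INSERT_ID() call return the existing row's serial instead of a new one. A sketch:

```sql
INSERT INTO table_name (field_1, field_2, field_3)
VALUES ('xx', 'xx', 'xx')
ON DUPLICATE KEY UPDATE unique_serial = LAST_INSERT_ID(unique_serial);

-- Returns the new serial on insert, or the existing row's serial on a dup.
SELECT LAST_INSERT_ID();
```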
Can someone spot a way to improve the performance of this query? I see
that every time it runs, it's "Copying to tmp table" and then
"Creating sort index" and taking way too long.
select count(distinct(stats.ip)) as popcount, stats.id, episodes.title
from stats, episodes where
I am an idiot. table_a and table_b have exactly the same structure.
How do I say this in SQL:
INSERT (all records from table_a) into table_b where table_a.customer
= '12'
Just trying to eventually duplicate the whole table, one customer's
set of records at a time. Thanks.
Thanks to everyone who replied. So simple I couldn't see it. :-)
I have similar (not identical) tables in two different db's. I want to
merge all the data from one into the other, and will be dropping the
original. There will be some duplication of keys so some rows will
need to be ignored. Problem is these are on two different servers on
different
OK, I have all my data on the new machine in a SQL file with 664,000
insert statements. But when I try to run it like this from the mysql
command line:
source filename.sql;
It starts to work, but after a few seconds the server freezes up with
too many connections. How do I avoid this?
I have a column where I need to replace all instances of the text "US-Complete"
(contained within a long sentence) with "US Complete".
There are probably 50 or 100 of them. I'm really scared to do it
since I can't risk screwing up that column - what's the correct syntax?
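REPLACE() only rewrites the matched substring and leaves the rest of the sentence alone, which makes this safer than it feels; a dry-run count first shows exactly how many rows will change. Table and column names here are placeholders:

```sql
-- Dry run: how many rows contain the old text?
SELECT COUNT(*) FROM your_table WHERE your_column LIKE '%US-Complete%';

-- The actual fix; rows without the text are untouched by the WHERE.
UPDATE your_table
SET your_column = REPLACE(your_column, 'US-Complete', 'US Complete')
WHERE your_column LIKE '%US-Complete%';
```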
...if I add a column to a table with 40,000,000 records, will it
cause that table to hang for any significant amount of time, and
prevent other transactions? It's a MyISAM table and I was going to
add a varchar(20) column, NULL. It's a very busy table, constant
searches and inserts.
--
Many thanks to all who took the time to reply. :) :)
I have a huge MySQL table, 2.1 million records, 200MB. Once a week I
need to dump it in CSV format and zip the file.
This is not on my server, and it's in production, so I don't want to
risk testing different methods and possibly hanging up their server
for a period of time, so I wanted to
in the database software. Not having any indices on your temp
table will help too.
HTH,
Dan
On 5/2/07, Brian Dunning [EMAIL PROTECTED] wrote:
I have a huge MySQL table, 2.1 million records, 200MB. Once a week I
need to dump it in CSV format and zip the file.
This is not on my server, and it's
Thanks to everyone who answered, think I've got enough info now to
handle it. :)
I host at Rackspace, and one thing that their monitoring service does
NOT catch is a problem when too many connections hit MySQL, and for
some reason it remains hung up until the service is manually
restarted. In the meantime, this is happening to us once or twice a
month. It gets hammered
I can't get MySQL 5 to like this query. Can you tell what I'm trying
to do, and is there a problem with my formatting?
select account_id,picture_id from pictures where account_id in
(select account_id from accounts order by rand() limit 20)
order by rand();
: if I make it a single query and try to do a 'group
by' it always gives the same picture for each person.
On Jan 16, 2007, at 11:55 AM, Rhino wrote:
- Original Message - From: Brian Dunning
[EMAIL PROTECTED]
To: mysql mysql@lists.mysql.com
Sent: Tuesday, January 16, 2007 2:23 PM
On Jan 7, 2007, at 4:23 PM, TK wrote:
In short, the original inventors of the GIF format (CompuServe,
1987) have always defined the pronunciation to be like JIF. So,
that has always been the correct pronunciation.
Sure, so I'll start pronouncing "graphics" as "jraphics".
This is the query that's killing me in the slow query log, usually
taking around 20 seconds:
select count(ip) as counted, stamp from ip_addr where stamp >= NOW() -
interval 14 day and source='sometext' group by stamp order by stamp
desc;
Here is the table:
CREATE TABLE `ip_addr` (
`ip`
My friend says "my sam" and "in-NOD-b". I want to kill him every
time. He also says "jif" instead of "gif".
On Jan 7, 2007, at 7:16 AM, Michael Stearne wrote:
I just say
My, I, Sam and inno, d, b
Michael
-Original Message-
From: js [EMAIL PROTECTED]
Date: Mon, 8 Jan 2007 00:09:15
The table structure is in my original post.
On Dec 14, 2006, at 11:49 AM, Rolando Edwards wrote:
Just change your table name and you are all set.
If you have the table structure, send it
and I'll make the query if you want.
+----+--------+
| id | color  |
+----+--------+
| 1  | red    |
| 2  | blue   |
| 3  | red    |
| 4  | yellow |
| 5  | yellow |
| 6  | blue   |
| .. | ...    |
I'm trying to select 5 random records, but no more than 1 of any
given color. According to the notes in the documentation and to my
], Brian Dunning
[EMAIL PROTECTED], mysql@lists.mysql.com
Sent: Thursday, December 14, 2006 12:17:30 PM GMT-0500 US/Eastern
Subject: Re: Workaround for distinct?
On Thu, 14 Dec 2006, Chris Boget wrote:
> I tried this out this morning on MySQL 5.
> It works. Please try this in MySQL 4 and see
I have many sites accessing one MySQL server, all using the same IP,
username, password via a PHP $connection = mysql_connect($ip,$user,
$pass);.
The server is extremely busy, hammered 24x7, but almost all traffic
is coming from one very short script. About once a day we're getting
No
I'm trying to delete all but the newest n records.
DELETE FROM tablename ORDER BY creation DESC LIMIT n
This does the opposite of what I want. Is there some way to tell it
to start the delete after n and delete all the remaining records?
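DELETE indeed takes no offset, and MySQL also refuses a subquery that selects from the table being deleted from (error 1093); wrapping the keep-list in a derived table works around both. A sketch assuming an `id` primary key and n = 100:

```sql
DELETE FROM tablename
WHERE id NOT IN (
    SELECT id FROM (
        SELECT id FROM tablename
        ORDER BY creation DESC
        LIMIT 100                -- the n newest rows to keep
    ) AS keepers
);
```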
The offset is what I was thinking of - that would be the simplest -
but as far as I can tell, delete doesn't support the offset. It's not
documented, and it gives me an error when I try it. I was hoping to
avoid two queries but it sounds like that's what I might have to do.
On Oct 4,
-- Original Message --
FROM: Brian Dunning [EMAIL PROTECTED]
TO:mysql@lists.mysql.com
DATE: Wed, 4 Oct 2006 08:49:48 -0700
SUBJECT: Re: Deleting, skip the first n records
The offset is what I was thinking of - that would be the simplest
I have a 17,000,000 record table that I'm trying to duplicate in
order to make some changes and improvements, then I'll rename it and
drop the original table. So I need this duplicate to be a live table
in the same database as the original.
I tried the copy table function in the Operations
Thanks Chris, this sounds great but when I read about mysqlhotcopy I
didn't see a way to make it create a live table that's open within
the same database, it seems to want only to create a separate backup
file in some directory.
On Sep 27, 2006, at 6:10 PM, Wagner, Chris (GEAE, CBTS)
copying */
- RENAME TABLE newtable TO newtable_bkup, newtable2 TO newtable
Dan
On 9/27/06, Brian Dunning [EMAIL PROTECTED] wrote:
I have a very busy 14,000,000 record table. I made a new version of
the table, that's more efficient, and now I just need to copy over
the data. Just about anything
phpMyAdmin is giving me the following warning:
PRIMARY and INDEX keys should not both be set for column `referer`
Here is what the table looks like:
CREATE TABLE `myspacemap_visitors_2` (
`creation` timestamp NOT NULL default CURRENT_TIMESTAMP on update
CURRENT_TIMESTAMP,
`lat` double NOT
I have a very busy 14,000,000 record table. I made a new version of
the table, that's more efficient, and now I just need to copy over
the data. Just about anything I try swamps the machine and locks up
MySQL with too many connections because it's so damn busy. Can
anyone suggest the most
I have a need to insert a record only when the combination of 3
fields is unique. I do this by having an index with all 3 fields, and
doing an INSERT IGNORE. This works fine.
Here's the element I can't figure out how to add: When there is a pre-
existing record, I want to update two of its
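With the 3-field unique index already in place, INSERT ... ON DUPLICATE KEY UPDATE handles the pre-existing case in the same statement; `hit_count` and `last_seen` below stand in for the two fields to be updated (the post is truncated before naming them):

```sql
INSERT INTO table_name (field_1, field_2, field_3, hit_count, last_seen)
VALUES ('xx', 'xx', 'xx', 1, NOW())
ON DUPLICATE KEY UPDATE
    hit_count = hit_count + 1,  -- runs only when the row already exists
    last_seen = NOW();
```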
, where you're copying into a duplicate
of your new table structure.
- CREATE TABLE newtable2 LIKE newtable
- INSERT INTO newtable2 SELECT * from oldtable /* or however you're
copying */
- RENAME TABLE newtable TO newtable_bkup, newtable2 TO newtable
Dan
On 9/27/06, Brian Dunning [EMAIL
Thanks Dan, I believe that's exactly what I was looking for. Thanks
for not saying RTFM even though it clearly applies. :)
On Sep 27, 2006, at 12:57 PM, Dan Julson wrote:
Brian,
Look at the ON DUPLICATE KEY UPDATE syntax within the INSERT SYNTAX
of the
Docs. That should give you what
Before I do this, I just wanted to check with you all to see if this
is the correct command:
/etc/rc.d/init.d/mysqld restart --log-slow-queries
If so, where exactly will I find the slow query log?
Will the slow query log be turned off by default next time I restart it?
We need a guy for some hourly PHP/MySQL work. Large project to start
with, plenty of incremental stuff down the road. Looking for someone
expert, with a flexible schedule who can make hours available when we
need them. Your regular hourly rate.
Also - only interested in someone local to
)) from table where ...
Steve Musumeche
CIO, Internet Retail Connection
[EMAIL PROTECTED]
Brian Dunning wrote:
Lat lon are two different fields. Either can be duplicated, but
not both.
On Sep 12, 2006, at 12:33 PM, Steve Musumeche wrote:
Select DISTINCT(lat_long_field) from table where
Never mind, I figured it out:
select distinct(concat(lat,lon)), lat, lon where
On Sep 13, 2006, at 6:57 AM, Brian Dunning wrote:
But if I do this, how do I still get lat and lon as two different
fields? This finds the right record set, but it returns both fields
concatenated
Hi Agrapin -
This sounds great. Could you please post some of your timber products
here to the list? Many of us are really looking for a break from this
boring MySQL stuff. Thanks, and our kind regards to you too.
- Brian
On Sep 11, 2006, at 7:57 PM, Agrapin S.A. - Timber Industry and
I'm searching a database of geopoints, and when two records have the
same latitude and longitude, I only want to return one of them -
basically just find all the unique locations. How do you set up a
select like this? Thanks...
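Grouping on both columns returns each distinct location exactly once while keeping lat and lon as separate fields, avoiding the CONCAT trick (which can also false-match unless a separator is added between the values). The table name is a placeholder:

```sql
SELECT lat, lon
FROM geopoints
GROUP BY lat, lon;
```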
Many different records will be returned though, I just don't want any
dupes where both lat/lon is the same.
:)
On Sep 12, 2006, at 12:20 PM, Hiep Nguyen wrote:
select * from table where . limit 1
that would do it if you don't care which one it returns
JC
On Tue, 12 Sep 2006, Brian
Lat lon are two different fields. Either can be duplicated, but not
both.
On Sep 12, 2006, at 12:33 PM, Steve Musumeche wrote:
Select DISTINCT(lat_long_field) from table where...
Steve Musumeche
CIO, Internet Retail Connection
[EMAIL PROTECTED]
Brian Dunning wrote:
Many different
I'm tracking site visitors with a database - one field is the IP
address, the other is the page they've visited. I don't want any
duplicates of this combination. Is there a MySQL statement similar to
INSERT IGNORE but will ignore only if the combination is duplicated?
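INSERT IGNORE itself can do this once the *combination* is declared unique: put a composite unique key on the pair, and only exact (ip, page) repeats are ignored. Names below are placeholders:

```sql
ALTER TABLE visits ADD UNIQUE KEY ip_page (ip, page);

-- Silently skipped if this exact ip/page pair is already present.
INSERT IGNORE INTO visits (ip, page) VALUES ('10.0.0.1', '/index.html');
```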
My understanding is that SCSI has a faster transfer rate, for
transferring large files. A busy database needs really fast access,
for making numerous fast calls all over the disk. Two different,
unrelated things.
I am more than willing to be called Wrong, slapped, and cast from a
bridge.
I have a really simple two-column database:
domain_name (primary key)
timestamp
I'm trying to keep track of the referrer of every visit to a web
site, and I'm looking for a single SQL statement (since my ISP limits
the total number of calls I can make in a day) that will either
insert a
I have a bunch of databases - some are really big, 2GB - on a number
of different accounts at PowWeb. I am buying a Rackspace server and
want to move everything over -- hopefully all in one night. Can
anyone suggest the best way to do this? Would it be to use the Export
command in
I'm logging IP addresses and also searching for existing ones. Does
anyone happen to know which is more resource intensive: storing the
IP's as integers using INET_NTOA() and INET_ATON() to convert them at
each query; or simply storing the IP as a varchar? In each case the
IP field would
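The integer form is generally cheaper: an INT UNSIGNED is 4 bytes versus up to 15 for a dotted-quad varchar, so the index is smaller, and the conversion runs once per query rather than once per row. A sketch with a hypothetical log table:

```sql
INSERT INTO ip_log (ip) VALUES (INET_ATON('192.168.0.1'));

SELECT INET_NTOA(ip) FROM ip_log
WHERE ip = INET_ATON('192.168.0.1');  -- integer compare, can use the index
```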
On Jan 5, 2006, at 9:38 PM, Michael Stassen wrote:
DATEDIFF was added in 4.1.1. What version of mysql do you have?
Thanks Michael - that was indeed the problem. Some ancient-ass
version that's been on my development server for who knows how many
years, in accordance with the If it ain't
I have a DATE field that includes a date sometime within the past 30
days. I'm trying to show the number of days until the expiration
date, which is 30 days in the future from the date in the field. I've
tried a bunch of permutations of something like this:
select (30 - SUBDATE(CURDATE() -
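With DATEDIFF (4.1.1+), the countdown can be written directly as the difference between the expiration date and today; `the_date` and `mytable` are placeholders:

```sql
SELECT DATEDIFF(the_date + INTERVAL 30 DAY, CURDATE()) AS days_left
FROM mytable;
```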
Thanks Peter, that appears to be exactly what I'm looking for, but it
still gives an error and I've been through it with a fine-toothed
comb, tried different versions, parens, etc. Here is the exact SQL
statement I'm using, with your suggestion:
select
accounts.username,
OK, I'm bad - I have curly quotes in my db that I failed to eliminate
prior to the insert. Now that they're in there, is there a way to
replace them with straight quotes? Everything I try fails to find them.
I'm using the following to find the nearest 10 records by proximity
to the ZIP code $zip. The problem is that it only returns one record
per ZIP code, even when there are multiple records with the same ZIP:
(Note this uses a temp table, but I already double-checked that all
the desired
You're exactly right, and that solved it! Thank you.
On Nov 22, 2005, at 7:49 AM, [EMAIL PROTECTED] wrote:
Brian Dunning [EMAIL PROTECTED] wrote on 11/22/2005 10:43:13
AM:
I'm using the following to find the nearest 10 records by proximity
to the ZIP code $zip. The problem
On Nov 6, 2005, at 6:05 PM, Jeremy Cole wrote:
DELETE FROM tbl ORDER BY ts DESC LIMIT 100, 99;
That's exactly what's needed: I was not aware of the OFFSET option. So I
tried this, which appears correct to me:
DELETE FROM table
WHERE field='somevalue'
ORDER BY timestamp DESC
LIMIT
On Nov 7, 2005, at 11:04 AM, Scott Noyes wrote:
delete from x where ID not in (select ID from x order by timestamp
desc limit
100);
It's a good suggestion, I'm just shying away from it because it seems
more resource intensive than using an offset, and my ISP is super
anal about
Interesting thought. I just tried it with and even 999 - same
error! I'm sure that 999 records is not too large for MySQL... :) :)
On Nov 7, 2005, at 4:52 PM, Joseph Cochran wrote:
The quoted line is 99 but you're using , which I
presume is too big for the system to
The problem is the most recent 100 records won't be sequential. There
are records with many different identifiers, in random order mixed
with other records that I don't want deleted, and each time I do this
I'm going to be limiting each subset of identified records to only
the most recent
On Nov 6, 2005, at 6:05 PM, Jeremy Cole wrote:
Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104
Yahoo employs MySQL geeks? Always wondered what db that big index
runs on... :)
This simple find is taking 4 to 7 seconds. Way too long!! (This is a
geotargeting query using the database from IP2location.)
select lat,lon from geocodes where ipFROM <= 1173020467 and
ipTO >= 1173020467
The database looks like this (how IP2location recommends):
CREATE TABLE `geocodes` (
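The truncated CREATE TABLE aside, range tables like IP2Location's often defeat a combined (ipFROM, ipTO) index. A widely used rewrite, assuming the ranges don't overlap, anchors on one column and verifies the other:

```sql
SELECT lat, lon
FROM geocodes
WHERE ipFROM <= 1173020467
  AND ipTO   >= 1173020467
ORDER BY ipFROM DESC
LIMIT 1;   -- one index dive on ipFROM instead of a range scan
```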
I'm trying to find a list of customers including a count of all their
invoices, but it's not including customers who have no invoices - and
it should. What's broken?
SELECT customers.company, count(invoices.id) as invcount
FROM customers, invoices
WHERE customers.id= invoices.customer_id
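The comma join is what drops the zero-invoice customers: it only produces rows where a match exists. Rewritten as a LEFT JOIN, unmatched customers survive with NULL invoice IDs, which COUNT(invoices.id) ignores, yielding 0:

```sql
SELECT customers.company, COUNT(invoices.id) AS invcount
FROM customers
LEFT JOIN invoices ON invoices.customer_id = customers.id
GROUP BY customers.id, customers.company;
```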
Thanks very much to all of you!
Obviously I need to learn more about joins. Appreciate the kick in
the pants.
:)
If I say:
SELECT *
FROM tablename
ORDER BY rand()
LIMIT 50
Will that give me the first 50 records from the table and randomize
their order, or will it give me 50 records randomly from throughout
the entire table?
On Nov 2, 2005, at 6:03 PM, Matt Babineau wrote:
It will do your whole table. Then give you 50 records.
But there won't be any duplicates, right? It won't include the same
records more than once. I know that sounds stupid, just trying to
debug an odd problem.
I got it to work - turns out it was a stupid typo on my part (hate it
when that happens - and hate wasting the list's time even more!). How
long does this temporary table persist for - just the execution of
the one page, or will it live on the server (using resources) until I
explicitly
That's awesome! I love it. Even though it didn't include me... :(
How are you doing the geotargeting?
If I say this, I get all my data:
<?php
$sql = "select * from myTable";
$result = mysql_query($sql);
?>
But if I say this, I get no results at all:
<?php
$sql = "create temporary table xxx select * from myTable";
$result1 = mysql_query($sql);
$sql = "select * from xxx";
$result2 = mysql_query($sql);
I'm in the midst of the age-old problem of finding the right SQL to
make a Dealer Locator by ZIP code. Like Steffan's posts from a couple
weeks ago, I have the ZIP code data in one table with latitudes and
longitudes, and the table of Dealers with ZIP codes in a separate table.
While
How do I append one table's contents to another? Both have identical
structure. Problem is I don't have shell access, only phpMyAdmin or a
PHP file I write and upload myself.
INSERT into tbl1 SELECT * FROM tbl2
Thanks to both of you, but this is not working. Since one of the
fields is a primary key that's duplicated in both tables (both tables
have records numbered 1, 2, 3...), it won't allow the duplicate
entries. Fortunately I do not need those primary key
On Oct 11, 2005, at 8:24 AM, Jose Miguel PĂ©rez wrote:
INSERT INTO table1 (field1, field2)
SELECT field1, field2 FROM table2
Jose's solution worked perfectly. Thanks everyone, sorry for being so
dense today. :)
I got a 12-hour invoice from a consultant who was tasked to do the
following:
- Install a Red Hat machine from absolute scratch for PHP/MySQL/Apache
- Copy over some MySQL databases
- Have mod_rewrite working via htaccess, and have wildcard DNS
I realize there are a billion different
My web site has always worked, I didn't change anything, so whatever
is causing this error happened all by itself:
#1017 - Can't find file: './kessler/products.frm' (errno: 13)
In phpMyAdmin, it shows the tables are in use and there's no way to
execute a repair or anything. I don't know
of this machine?
do you have administrator access?
does kessler/products.frm still exist? [likely: /var/lib/mysql/
kessler/products.frm]
On Sat, 3 Sep 2005, Brian Dunning wrote:
My web site has always worked, I didn't change anything, so
whatever is causing this error happened all by itself:
#1017
On Sep 3, 2005, at 1:43 PM, Jason Pyeron wrote:
google pulls up many instances of this error from several
'different' servers, you might look into the software.
http://www.google.com/search?num=50&hl=en&lr=lang_en&safe=off&q=%27kessler%2Fproducts.frm%27&btnG=Search&lr=
The reason for that is a
On Sep 3, 2005, at 1:43 PM, Jason Pyeron wrote:
it could be that the permissions are wrong, too.
The permissions for all the files in there are -rwxrwxr-x ...I don't
know if that's what they should be or not.
FIXED...
I did a chmod +rwx on the directory, and now all is well.
1 - 100 of 130 matches