On Wed, 2003-11-19 at 15:08, Dan Greene wrote:
> one more idea:
>
> try:
> mysql --skip-column-names --raw < test1 | tar xf -
No output. And:
mysql --skip-column-names --raw < test1 | more
./test1
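For context, here is a minimal end-to-end sketch of what is being attempted,
assuming a table my_db.my_table with a blob column tarball that holds the
archive (the table and column names are placeholders, not from the thread):

# test1 holds the query; --skip-column-names drops the header row and
# --raw stops mysql from escaping the binary bytes in the blob
printf 'use my_db;\nselect tarball from my_table where id = 1;\n' > test1

# stream the blob straight into tar, with no temporary file on disk
mysql --skip-column-names --raw < test1 | tar xf -

Note that tar xf extracts silently, so "no output" by itself does not mean
the pipe failed; tar tvf - would list the archive contents instead.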
> > -Original Message-
> > From: Denis Mercier
>
> to see if it works
>
> > -Original Message-
> > From: Denis Mercier [mailto:[EMAIL PROTECTED]
> > Sent: Wednesday, November 19, 2003 2:41 PM
> > To: [EMAIL PROTECTED]
> > Subject: Re: piping blob into shell command (tar)
> >
[...] using LOAD_FILE().
before:
use test;
select * from test;
after:
mysql --skip-column-names < test1 | more
use test;\nselect * from test;\n
\n's are added?
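That matches mysql's batch-mode behavior: by default the client escapes
newlines, tabs, NULs, and backslashes in column output, which is exactly
what corrupts a binary blob on the way out; --raw switches the escaping
off. A quick way to see the difference with the same query file:

# default batch mode: each newline in the value comes back as the two
# characters '\' and 'n'
mysql --skip-column-names < test1 | od -c | head

# with --raw the bytes pass through untouched, safe to feed into tar
mysql --skip-column-names --raw < test1 | od -c | head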
On Wed, 2003-11-19 at 12:26, Paul DuBois wrote:
> At 11:03 -0500 11/19/03, Denis Mercier wrote:
>
> >I also tried:
> >use my_db;
> >select * from my_table;
On Tue, 2003-11-18 at 16:40, Paul DuBois wrote:
> At 16:21 -0500 11/18/03, Denis Mercier wrote:
> >here's what I'm trying to do: I have a tar file in a blob field
> >and I'm trying to retrieve it and pipe it directly into tar
> >to decompress it, without first writing it to the hard drive.
Here's what I'm trying to do: I have a tar file in a blob field
and I'm trying to retrieve it and pipe it directly into tar
to decompress it, without first writing it to the hard drive.
Here's what I've tried so far. I create a text file called test1:
use my_db;
select * into dumpfile "/usr/lo
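For the archive, the two statements involved, with placeholder names
(my_table, tarball, and the /tmp paths are illustrative; the truncated
dumpfile path above is left as it appeared):

-- load the archive into the blob; LOAD_FILE() reads a file that lives
-- on the server host and requires the FILE privilege
insert into my_table (tarball) values (load_file('/tmp/backup.tar'));

-- write it back out byte-for-byte: INTO DUMPFILE writes one row with no
-- escaping and no column or line terminators, unlike INTO OUTFILE
select tarball into dumpfile '/tmp/restored.tar' from my_table where id = 1;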
try this link: http://jeremy.zawodny.com/blog/archives/000796.html
Setting avg_row_length to 50 worked for me; I tested it and got
mytable up to 9GB (a large table with variable-size records).
The only reason I could see that it would matter to have an accurate
value for avg_row_length would
I also had the "table is full" error, just today actually.
mysql> alter table mytable max_rows = 2000 avg_row_length = 50;
mysql> show table status like 'mytable' \G
*************************** 1. row ***************************
      Name: mytable
      Type: MyISAM
Row_format: Dynamic
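As I understand it, for MyISAM those two options are sizing hints: MySQL
uses the product max_rows * avg_row_length to pick a wider internal data
pointer, which raises the data-file ceiling (the old default pointer capped
a table at about 4GB). The new limit shows up as Max_data_length.
Illustrative values only:

mysql> alter table mytable max_rows = 1000000000 avg_row_length = 50;
mysql> show table status like 'mytable' \G

Compare Max_data_length in the output before and after the ALTER to confirm
the cap actually moved.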
hi,
I am presently going over the MySQL documentation to get familiar with
it. It runs great on my development server (Linux RH 7.1, kernel 2.4.2-2,
Resin application server). I am in the process of optimizing and testing;
I am using the blob datatype in my main table. I understand why a
fixed-size form