Re: [GENERAL] How to define the limit length for numeric type?

2017-03-13 Thread vod vos
Sorry for taking varchar(4) as an example. What I urgently want is a constraint that 
fixes the format of a numeric value, 
so that you can only input data like 23.45, and if you input data like 2.45, 
23.4356, or 233.45, you get an error 
message from PostgreSQL.

I think a CHECK expression will do the job, but is there a simpler way to do it in 
PostgreSQL?
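A minimal sketch of such a constraint, assuming the goal is exactly two digits on each side of the decimal point. The column is declared as plain numeric on purpose: with a declared scale such as numeric(4,2) the input would be rounded before the constraint runs, so 23.4356 would arrive at the CHECK already rounded to 23.44 and pass.

```sql
-- Sketch, not a definitive recipe: accept 23.45; reject 2.45, 23.4356, 233.45.
CREATE TABLE test (
    id    serial,
    goose numeric CHECK (goose::text ~ '^[0-9]{2}\.[0-9]{2}$')
);

INSERT INTO test (goose) VALUES (23.45);    -- accepted
INSERT INTO test (goose) VALUES (23.4356);  -- rejected by the CHECK
```

numeric preserves the scale of the literal as written, so the text cast sees the value exactly as it was entered.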


  On Sunday, 12 March 2017 14:28:53 -0700 rob stone  wrote 
 
 > Hello, 
 >  
 > On Sat, 2017-03-11 at 22:14 -0800, vod vos wrote: 
 > > Hi everyone, 
 > >  
 > > How to define the exact limit length of numeric type? For example,  
 > >  
 > > CREATE TABLE test  (id serial, goose numeric(4,1)); 
 > >  
 > > 300.2 and 30.2 can be inserted into COLUMN goose, but I want 30.2 or 
 > > 3.2 can not be inserted, how to do this? 
 > >  
 > > Thank you. 
 > >  
 > >  
 > >  
 >  
 >  
 > Assuming that column goose may only contain values ranging from 100.0 
 > to 999.9, then a check constraint along the lines of:- 
 >  
 > goose > 99.9 AND goose < 1000 
 >  
 > should do the trick. 
 >  
 > HTH, 
 > Rob 
 >  
 >  
 > --  
 > Sent via pgsql-general mailing list (pgsql-general@postgresql.org) 
 > To make changes to your subscription: 
 > http://www.postgresql.org/mailpref/pgsql-general 
 >





Re: [GENERAL] How to define the limit length for numeric type?

2017-03-12 Thread vod vos
So there is no simpler method for checking that? Something like varchar(4), where 
only 4 characters can be input?

Would using a regexp cost more CPU or memory?
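One way to define the regexp check once rather than repeating it per column is a DOMAIN. A sketch (the domain name is hypothetical; the CHECK mirrors the regexp Charles shows below):

```sql
-- Any column of this type only accepts values with exactly two decimal digits.
CREATE DOMAIN numeric_2dec AS numeric
    CHECK (VALUE::text ~ '^[0-9]+\.[0-9]{2}$');

CREATE TABLE test (id serial, goose numeric_2dec);
```

This does not change the per-row cost of the regexp, but it keeps the rule in one place.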


  On Saturday, 11 March 2017 23:21:16 -0800 Charles Clavadetscher 
 wrote  
 > Hello 
 >  
 > > -Original Message- 
 > > From: pgsql-general-ow...@postgresql.org 
 > > [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of vod vos 
 > > Sent: Sonntag, 12. März 2017 08:01 
 > > To: Pavel Stehule  
 > > Cc: pgsql-general  
 > > Subject: Re: [GENERAL] How to define the limit length for numeric type? 
 > >  
 > > Maybe CHECK (goose >= 100 OR goose <= -100) works better. But if: 
 > >  
 > > INSERT INTO test VALUES (1, 59.2); 
 > > INSERT INTO test VALUES (1, 59.24); 
 > > INSERT INTO test VALUES (1, 59.26); 
 > > INSERT INTO test VALUES (1, 59.2678); 
 > >  
 > > The INSERT action still can be done. What I want is just how to limit the 
 > > length of the insert value, you can just 
 > > type format like 59.22, only four digits length. 
 >  
 > You may change (or extend) the CHECK condition using regexp: 
 >  
 > SELECT 59.2::TEXT ~ '^[0-9]+\.[0-9]{2}$'; 
 >  ?column? 
 > ---------- 
 >  f 
 >  
 > SELECT 59.24::TEXT ~ '^[0-9]+\.[0-9]{2}$'; 
 >  ?column? 
 > ---------- 
 >  t 
 >  
 > SELECT 59.26::TEXT ~ '^[0-9]+\.[0-9]{2}$'; 
 >  ?column? 
 > ---------- 
 >  t 
 >  
 > SELECT 59.2678::TEXT ~ '^[0-9]+\.[0-9]{2}$'; 
 >  ?column? 
 > ---------- 
 >  f 
 >  
 > Of course you can change the part left of the dot to also be limited to 2 
 > digits. 
 >  
 > Regards 
 > Charles 
 >  
 > >  
 > > Thank you. 
 > >  
 > >  
 > >   On Saturday, 11 March 2017 22:25:19 -0800 Pavel Stehule wrote 
 > >  > 2017-03-12 7:14 GMT+01:00 vod vos : 
 > >  > 
 > >  >  Hi everyone, 
 > >  > 
 > >  >  How to define the exact limit length of numeric type? For example, 
 > >  > 
 > >  >  CREATE TABLE test  (id serial, goose numeric(4,1)); 
 > >  > 
 > >  >  300.2 and 30.2 can be inserted into COLUMN goose, but I want 30.2 or 
 > >  >  3.2 can not be inserted, how to do this? 
 > >  > 
 > >  > postgres=# CREATE TABLE test  (id serial, goose numeric(4,1)); 
 > >  > CREATE TABLE 
 > >  > Time: 351,066 ms 
 > >  > postgres=# insert into test values(1,3.2); 
 > >  > INSERT 0 1 
 > >  > Time: 65,997 ms 
 > >  > postgres=# select * from test; 
 > >  > ┌────┬───────┐ 
 > >  > │ id │ goose │ 
 > >  > ╞════╪═══════╡ 
 > >  > │  1 │   3.2 │ 
 > >  > └────┴───────┘ 
 > >  > (1 row) 
 > >  > Time: 68,022 ms 
 > >  > Regards 
 > >  > Pavel 
 > >  >  Thank you. 
 > >  > 
 > >  > 
 > >  > 
 > >  
 > >  
 >  
 >  
 >  






Re: [GENERAL] How to define the limit length for numeric type?

2017-03-11 Thread vod vos
Maybe CHECK (goose >= 100 OR goose <= -100) works better. But if:

INSERT INTO test VALUES (1, 59.2);
INSERT INTO test VALUES (1, 59.24);
INSERT INTO test VALUES (1, 59.26);
INSERT INTO test VALUES (1, 59.2678);

The INSERTs still all succeed. What I want is to limit the length of the inserted 
value, so that only a format like 59.22, four digits in total, can be 
typed.

Thank you.


  On Saturday, 11 March 2017 22:25:19 -0800 Pavel Stehule 
 wrote  
 > 
 > 
 > 2017-03-12 7:14 GMT+01:00 vod vos :
 > 
 >  Hi everyone,
 >  
 >  How to define the exact limit length of numeric type? For example,
 >  
 >  CREATE TABLE test  (id serial, goose numeric(4,1));
 >  
 >  300.2 and 30.2 can be inserted into COLUMN goose, but I want 30.2 or 3.2 
 > can not be inserted, how to do this?
 > 
 > postgres=# CREATE TABLE test  (id serial, goose numeric(4,1)); 
 > CREATE TABLE 
 > Time: 351,066 ms 
 > postgres=# insert into test values(1,3.2); 
 > INSERT 0 1 
 > Time: 65,997 ms 
 > postgres=# select * from test; 
 > ┌────┬───────┐ 
 > │ id │ goose │ 
 > ╞════╪═══════╡ 
 > │  1 │   3.2 │ 
 > └────┴───────┘ 
 > (1 row) 
 > Time: 68,022 ms 
 > Regards
 > Pavel  
 >  Thank you.
 >  
 >  
 >  
 >  
 >





[GENERAL] How to define the limit length for numeric type?

2017-03-11 Thread vod vos

Hi everyone,

How to define an exact length limit for the numeric type? For example, 

CREATE TABLE test  (id serial, goose numeric(4,1));

300.2 and 30.2 can both be inserted into COLUMN goose, but I want 30.2 and 3.2 to 
be rejected. How can I do this?
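If the real requirement is simply "at least three digits before the decimal point", a range CHECK is enough. A sketch, assuming only non-negative values are wanted (add a lower branch for negatives if needed):

```sql
CREATE TABLE test (
    id    serial,
    goose numeric(4,1) CHECK (goose >= 100.0)
);

INSERT INTO test (goose) VALUES (300.2);  -- ok
INSERT INTO test (goose) VALUES (30.2);   -- fails the CHECK
```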

Thank you.





Re: [GENERAL] COPY: row is too big

2017-01-05 Thread vod vos
I finally figured it out as follows:

1. I modified the data types of the columns to match the csv file.

2. Where null values existed, I defined the column type as varchar; the null values 
caused problems too.

So the 1100 columns work well now.

This problem cost me three days. I have lots of csv data to COPY.









 On Wednesday, 04 January 2017 08:39:42 -0800 Adrian Klaver 
 wrote 




On 01/04/2017 08:32 AM, Steve Crawford wrote: 
> ... 
> 
> Numeric is expensive type - try to use float instead, maybe double. 
> 
> If I am following the OP correctly the table itself has all the 
> columns declared as varchar. The data in the CSV file is a mix of 
> text, date and numeric, presumably cast to text on entry into the table. 
> 
> But a CSV *is* purely text - no casting to text is needed. Conversion is 
> only needed when the strings in the CSV are text representations of 
> *non*-text data. 

Yeah, muddled thinking. 

> I'm guessing that the OP is using all text fields to deal with possibly 
> flawed input data and then validating and migrating the data in 
> subsequent steps. In that case, an ETL solution may be a better 
> approach. Many options, both open-, closed- and hybrid-source, exist. 
> 
> Cheers, 
> Steve 

-- 
Adrian Klaver 
adrian.kla...@aklaver.com 

 

 









Re: [GENERAL] COPY: row is too big

2017-01-04 Thread vod vos
OK, maybe the final solution is to split it in half.


 On Wednesday, 04 January 2017 06:53:31 -0800 Adrian Klaver 
<adrian.kla...@aklaver.com> wrote 




On 01/04/2017 05:00 AM, vod vos wrote: 
> Now I am confused about I can create 1100 columns in a table in 
> postgresql, but I can't copy 1100 values into the table. And I really 

As pointed out previously: 

https://www.postgresql.org/about/ 
Maximum Columns per Table: 250 - 1600, depending on column types 

That being dependent on both the number of columns and the actual data 
in the columns. Empty columns are not the problem; it is when you start 
filling them that you get the error. 

 

> dont want to split the csv file to pieces to avoid mistakes after this 
> action. 
> 
> I create a table with 1100 columns with data type of varchar, and hope 
> the COPY command will auto transfer the csv data that contains some 

I am afraid the solution is going to require more than hope. You are 
going to need to break the data up. I suspect that just splitting it 
into half would do the trick. So: 

Table 1 
column 1 for a primary key (assuming first column of your present data) 
columns 2-550 

Table 2 
column 1 for a primary key (assuming first column of your present data) 
columns 551-1100 

Using the program I mentioned previously: 

http://csvkit.readthedocs.io/en/1.0.1/scripts/csvcut.html 

That translates into: 

csvcut -c 1,2-550 your_big.csv > table_1.csv 

csvcut -c 1,551-1100 your_big.csv > table_2.csv 
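If queries still need all 1100 columns together after the split, the two halves can be recombined with a view. A sketch (table, view, and key names are placeholders for whatever the real split uses):

```sql
-- Present the two halves as one relation again, joined on the shared key.
CREATE VIEW rius_all AS
SELECT *
FROM table_1
JOIN table_2 USING (id);
```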

 

 

> character and date, most of which are numeric. 

Is this a different data set? 
Previously you said: 
"The most of the data type are text or varhcar, ..." 

> I use the command: COPY rius FROM "/var/www/test/test.csv" WITH 
> DELIMITER ';' ; 
> 
> Then it shows: 
> 
> ERROR: row is too big: size 11808, maximum size 8160 

>  On Tuesday, 03 January 2017 05:24:18 -0800 *John McKown 
> <john.archie.mck...@gmail.com>* wrote  
> 

> 

> On Mon, Jan 2, 2017 at 2:57 PM, Rob Sargent <robjsarg...@gmail.com 
> <mailto:robjsarg...@gmail.com>> wrote: 
> 
> Perhaps this is your opportunity to correct someone else's 
> mistake. You need to show the table definition to convince us 
> that it cannot be improved. That it may be hard work really 
> doesn't mean it's not the right path. 
> 
> This may not be possible. The data might be coming in from an 
> external source. I imagine you've run into the old "well, _we_ don't 
> have any problems, so it must be on your end!" scenario. 
> 
> Example: we receive CSV files from an external source. These files 
> are _supposed_ to be validated. But we have often received files 
> where NOT NULL fields have "nothing" in them. E.g. a customer 
> bill which has _everything_ in it _except_ the customer number (or 
> an invalid one such as "123{"); or missing some other vital piece of 
> information. 
> 
> In this particular case, the OP might want to do what we did in a 
> similar case. We had way too many columns in a table. The 
> performance was horrible. We did an analysis and, as usual, the 
> majority of the selects were for a subset of the columns, about 15% 
> of the total. We "split" the table into the "high use" columns table 
> & the "low use" columns table. We then used triggers to make sure 
> that if we added a new / deleted an old row from one table, the 
> corresponding row in the other was created / deleted. 


> -- 
> There’s no obfuscated Perl contest because it’s pointless. 
> 
> —Jeff Polk 
> 
> Maranatha! <>< 
> John McKown 

 

 

-- 
Adrian Klaver 
adrian.kla...@aklaver.com 








Re: [GENERAL] COPY: row is too big

2017-01-04 Thread vod vos
Now I am confused: I can create 1100 columns in a table in PostgreSQL, but 
I can't copy 1100 values into the table. And I really don't want to split the 
csv file into pieces, to avoid mistakes after this action.

I created a table with 1100 columns with data type varchar, and hoped the COPY 
command would automatically load the csv data, which contains some character and 
date values, though most are numeric.

 

I use the command: COPY rius FROM '/var/www/test/test.csv' WITH DELIMITER ';' ;



Then it shows: 



ERROR:  row is too big: size 11808, maximum size 8160
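For reference, a COPY invocation that parses such a file as CSV might look like the sketch below. Note the single quotes around the path (double quotes denote identifiers in PostgreSQL); HEADER is included only because the poster said earlier that every csv file contains a header line.

```sql
COPY rius FROM '/var/www/test/test.csv'
    WITH (FORMAT csv, DELIMITER ';', HEADER);
```

This does not avoid the row-size error, which comes from the width of the resulting heap row, not from the COPY syntax.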

 On Tuesday, 03 January 2017 05:24:18 -0800 John McKown 
 wrote 




On Mon, Jan 2, 2017 at 2:57 PM, Rob Sargent  wrote:

Perhaps this is your opportunity to correct someone else's mistake. You need to 
show the table definition to convince us that it cannot be improved. That it 
may be hard work really doesn't mean it's not the right path.




​This may not be possible. The data might be coming in from an external source. 
I imagine you've run into the old "well, _we_ don't have any problems, so it 
must be on your end!" scenario. 



Example: we receive CSV files from an external source. These files are 
_supposed_ to be validated. But we have often received files where NOT NULL 
fields have "nothing" in them. E.g. a customer bill which has _everything_ 
in it _except_ the customer number (or an invalid one such as "123{"); or 
missing some other vital piece of information.



In this particular case, the OP might want to do what we did in a similar case. 
We had way too many columns in a table. The performance was horrible. We did an 
analysis and, as usual, the majority of the selects were for a subset of the 
columns, about 15% of the total. We "split" the table into the "high use" 
columns table & the "low use" columns table. We then used triggers to make 
sure that if we added a new / deleted an old row from one table, the 
corresponding row in the other was created / deleted.
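The trigger arrangement described above might look roughly like this sketch (all table and function names are hypothetical; EXECUTE PROCEDURE is the spelling used in the 9.x releases under discussion):

```sql
-- Keep a matching row in low_use for every row added to or removed from high_use.
CREATE FUNCTION sync_low_use() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO low_use (id) VALUES (NEW.id);
    ELSIF TG_OP = 'DELETE' THEN
        DELETE FROM low_use WHERE id = OLD.id;
    END IF;
    RETURN NULL;  -- AFTER trigger; the return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER high_use_sync
    AFTER INSERT OR DELETE ON high_use
    FOR EACH ROW EXECUTE PROCEDURE sync_low_use();
```

A mirror trigger on low_use would be needed if rows can be inserted or deleted from that side as well.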




 
















-- 
There’s no obfuscated Perl contest because it’s pointless.

—Jeff Polk

Maranatha! <><
John McKown
Re: [GENERAL] COPY: row is too big

2017-01-02 Thread vod vos
You know, the csv file was exported from another machine's database, so I 
really don't want to break it up; that is hard work. Every csv file contains 
headers and values. If I redesign the table, then I have to cut all the csv 
files into pieces one by one. 





 On Monday, 02 January 2017 08:21:29 -0800 Tom Lane <t...@sss.pgh.pa.us> 
wrote 




vod vos <vod...@zoho.com> writes: 

> When I copy data from a csv file, there are very long values for many columns 
(about 1100 columns). The error appears: 

> ERROR: row is too big: size 11808, maximum size 8160 

 

You need to rethink your table schema so you have fewer columns. 

Perhaps you can combine some of them into arrays, for example. 

JSON might be a useful option, too. 
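A sketch of the JSON option mentioned above (table and column names are placeholders): a single jsonb value is TOASTed, i.e. compressed and stored out of line as needed, so the ~8 kB heap-row limit that 1100 separate varchar columns run into no longer binds.

```sql
CREATE TABLE rius (
    id   serial PRIMARY KEY,
    data jsonb  -- one document holding the ~1100 fields
);

-- Individual fields remain queryable, e.g.:
-- SELECT data->>'some_field' FROM rius;
```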

 

regards, tom lane 

 

 









Re: [GENERAL] COPY: row is too big

2017-01-02 Thread vod vos
Most of the data types are text or varchar, and I use:



COPY rius FROM '/var/www/test/aa.csv' WITH DELIMITER ';' ;



And some of the values in the csv file are null; do these null values matter? 



Thanks.
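On the null question: with COPY in text format (the default used above), an empty field is read as an empty string and only the literal \N is taken as NULL, whereas with FORMAT csv an unquoted empty field is read as NULL. So the answer depends on which format the files are loaded with; a csv-mode sketch:

```sql
-- Treat unquoted empty fields as NULL (the csv-format default, spelled out here).
COPY rius FROM '/var/www/test/aa.csv'
    WITH (FORMAT csv, DELIMITER ';', NULL '');
```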




 On Monday, 02 January 2017 03:11:14 -0800 vod vos <vod...@zoho.com> wrote 




Hi everyone,



My PostgreSQL is 9.6.1.



When I copy data from a csv file, there are very long values for many columns (about 1100 
columns). This error appears:





ERROR:  row is too big: size 11808, maximum size 8160
CONTEXT:  COPY rius, line 2



rius is the table.



I have searched the mailing list, but it seems no solution has been found.



Thanks.

[GENERAL] COPY: row is too big

2017-01-02 Thread vod vos
Hi everyone,



My PostgreSQL is 9.6.1.



When I copy data from a csv file, there are very long values for many columns (about 1100 
columns). This error appears:





ERROR:  row is too big: size 11808, maximum size 8160
CONTEXT:  COPY rius, line 2



rius is the table.



I have searched the mailing list, but it seems no solution has been found.



Thanks.