Have you tried changing the block size?
http://wiki.postgresql.org/wiki/FAQ#What_is_the_maximum_size_for_a_row.2C_a_table.2C_and_a_database.3F
On Wed, Jul 13, 2011 at 11:03 AM, Miguel Angel Conte wrote:
> I have the metadata in the same csv.
>
> On Wed, Jul 13, 2011 at 3:00 PM, Kevin Crain wrote
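For reference, the row-size ceiling is tied to the page size, which is fixed when the server is compiled (8 kB by default); changing it means a custom build and a dump/reload. A quick way to check it from psql:

```sql
-- Show the compiled-in page size (8192 bytes by default).
-- Changing it requires recompiling PostgreSQL and reloading all data.
SHOW block_size;
```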
Is there any room for improvement in the data types?
On Wed, Jul 13, 2011 at 11:03 AM, Miguel Angel Conte wrote:
> I have the metadata in the same csv.
>
> On Wed, Jul 13, 2011 at 3:00 PM, Kevin Crain wrote:
>>
>> How are you determining the data types for these columns?
>>
>> On Wed, Jul 13, 20
I have the metadata in the same csv.
On Wed, Jul 13, 2011 at 3:00 PM, Kevin Crain wrote:
> How are you determining the data types for these columns?
>
> On Wed, Jul 13, 2011 at 8:45 AM, Miguel Angel Conte
> wrote:
> > Hi,
> > Thanks for your interest. This app loads csv files which change every
How are you determining the data types for these columns?
On Wed, Jul 13, 2011 at 8:45 AM, Miguel Angel Conte wrote:
> Hi,
> Thanks for your interest. This app loads csv files which change every day
> (sometimes the columns too). The sizes of these files are on average 15 MB. So,
> we load something li
I can't drop the table. I have to add as many columns as possible, and when I
exceed the limit I have to create another table.
I've tried normalizing, but then the join's cost is too big. I always need to
use all the columns, so getting all the information into a single row is the
most efficient solution.
On Wed, Jul 13, 2011 at 9:45 AM, Miguel Angel Conte wrote:
> Hi,
> Thanks for your interest. This app loads csv files which change every day
> (sometimes the columns too). The sizes of these files are on average 15 MB. So,
> we load something like 100 MB each day. We tried to find a better solution
> but
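As a sketch of the create-another-table approach (table and column names here are illustrative, not from the thread): keep a shared key in both tables and reassemble the row with a single one-to-one join, which is far cheaper than a many-way normalized join.

```sql
-- Hypothetical vertical split: wide_a holds the first batch of columns,
-- wide_b the overflow; both share the same primary key.
CREATE TABLE wide_a (id bigint PRIMARY KEY, col1 text, col2 text /* ... */);
CREATE TABLE wide_b (id bigint PRIMARY KEY REFERENCES wide_a (id),
                     col801 text /* ... */);

-- Reassemble the full row when every column is needed:
SELECT *
FROM wide_a
JOIN wide_b USING (id);
```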
On Wed, Jul 13, 2011 at 12:45:45PM -0300, Miguel Angel Conte wrote:
> Hi,
>
> Thanks for your interest. This app loads csv files which change every day
> (sometimes the columns too). The sizes of these files are on average 15 MB. So,
> we load something like 100 MB each day. We tried to find a better sol
Hi,
Thanks for your interest. This app loads csv files which change every day
(sometimes the columns too). The sizes of these files are on average 15 MB. So,
we load something like 100 MB each day. We tried to find a better solution
but we couldn't, because one of our requirements is not to use a lot
I still can't imagine why you'd ever need this... could you explain
what this does? I'm just curious now.
On Tue, Jul 12, 2011 at 10:55 PM, Kevin Crain wrote:
> This is an unfortunate situation; you shouldn't be required to do
> this. The people generating your requirements need to be more
> in
This is an unfortunate situation; you shouldn't be required to do
this. The people generating your requirements need to be more
informed. I would make damn sure you notify the stakeholders in this
project that the data model is screwed and needs a redesign. I agree
that you should split this tabl
Yes, sure. I mean, I can't change the whole process which creates columns
dynamically.
On Tue, Jul 12, 2011 at 6:12 PM, Reinoud van Leeuwen <
reinou...@n.leeuwen.net> wrote:
> On Tue, Jul 12, 2011 at 03:08:36PM -0300, Miguel Angel Conte wrote:
> > Unfortunately it's an inherited data model and I
On Tue, Jul 12, 2011 at 03:08:36PM -0300, Miguel Angel Conte wrote:
> Unfortunately it's an inherited data model and I can't make any changes for
> now...
but by adding columns you *are* making changes to it...
Reinoud
--
On Tue, Jul 12, 2011 at 12:08 PM, Miguel Angel Conte wrote:
> Unfortunately it's an inherited data model and I can't make any changes for
> now...
> Thanks for your answer!
when you can change it, look at hstore
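For later readers, a minimal hstore sketch (assuming PostgreSQL 9.1+ for CREATE EXTENSION; on 9.0 the contrib module is installed via its SQL script instead; table and key names are made up). New CSV columns become new keys, so no DDL is needed per column:

```sql
CREATE EXTENSION IF NOT EXISTS hstore;  -- 9.1+; contrib script on 9.0

-- One row per CSV line; all variable columns live in one hstore value.
CREATE TABLE daily_load (
    id    bigserial PRIMARY KEY,
    attrs hstore NOT NULL
);

INSERT INTO daily_load (attrs)
VALUES ('"colA"=>"1", "colB"=>"foo"');

-- Pull a single attribute back out (as text):
SELECT attrs -> 'colA' FROM daily_load;
```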
--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
Hi Miguel,
maybe you can split the table into two tables with a one-to-one relation.
Another way is to create dynamic-attribute tables, which means storing the
data in rows, not in columns.
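A minimal sketch of the dynamic-attribute (entity-attribute-value) layout, with illustrative names: each CSV cell becomes one row, so a new column in tomorrow's file needs no ALTER TABLE:

```sql
CREATE TABLE row_attr (
    row_id    bigint NOT NULL,   -- which CSV line
    attr_name text   NOT NULL,   -- which CSV column
    attr_val  text,              -- the cell value, kept as text
    PRIMARY KEY (row_id, attr_name)
);
```

The trade-off raised elsewhere in the thread still applies: reassembling a full logical row means grouping or pivoting many attribute rows back together.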
On Tue, Jul 12, 2011 at 7:48 PM, Miguel Angel Conte wrote:
> Hi,
>
> I'm using postgresql 9 and I'd like to know
>
> Hi Ken,
>
> Do you know a good way to get the max row size in a table?
> Or maybe I'll have to get this information from the metadata
>
> thanks!
>
>
> On Tue, Jul 12, 2011 at 3:11 PM, k...@rice.edu wrote:
>
>> On Tue, Jul 12, 2011 at 03:08:36PM -0300, Miguel Angel Conte wrote:
>> > Unfort
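Regarding the quoted question about row size: one approach (a sketch; `my_table` is a placeholder name) is pg_column_size applied to the whole row value:

```sql
-- Approximate size of the widest row in the table, in bytes.
SELECT max(pg_column_size(t.*)) AS max_row_bytes
FROM my_table t;
```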
On Tue, Jul 12, 2011 at 03:08:36PM -0300, Miguel Angel Conte wrote:
> Unfortunately it's an inherited data model and I can't make any changes for
> now...
> Thanks for your answer!
>
> On Tue, Jul 12, 2011 at 2:52 PM, Reinoud van Leeuwen <
> reinou...@n.leeuwen.net> wrote:
>
> > On Tue, Jul 12, 20
Unfortunately it's an inherited data model and I can't make any changes for
now...
Thanks for your answer!
On Tue, Jul 12, 2011 at 2:52 PM, Reinoud van Leeuwen <
reinou...@n.leeuwen.net> wrote:
> On Tue, Jul 12, 2011 at 02:48:26PM -0300, Miguel Angel Conte wrote:
>
> > Something like:
> > "If I'm
On Tue, Jul 12, 2011 at 02:48:26PM -0300, Miguel Angel Conte wrote:
> Something like:
> "If I'm not going to exceed the size limit, then I can add a new column"
You want to add columns in your application? Are you sure you have the
right data model?
Reinoud
--
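One way to "ask" before adding a column (a sketch; `my_table` is a placeholder) is to count the attribute slots already in use on the table:

```sql
-- Columns currently defined on the table (system columns excluded).
-- Dropped columns are deliberately NOT filtered out: their slots still
-- count toward the 1600-column cap.
SELECT count(*) AS used_slots
FROM pg_attribute
WHERE attrelid = 'my_table'::regclass
  AND attnum > 0;
```

If `used_slots` is at 1600, the next ALTER TABLE ... ADD COLUMN will fail.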
Hi,
I'm using PostgreSQL 9 and I'd like to know if there is a way to "ask",
when I'm about to add a column, whether I'm exceeding the max number of
columns. I've found that the max number of columns is 1600 and that it
depends on the data types.
I've made a test adding 1600 columns using different data ty
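A test like the one described can be reproduced with a DO block (plain string concatenation is used so it also runs where `format()` is unavailable; the table name is illustrative):

```sql
DO $$
BEGIN
    EXECUTE 'CREATE TABLE col_limit_test (c0 boolean)';
    FOR i IN 1..1599 LOOP
        EXECUTE 'ALTER TABLE col_limit_test ADD COLUMN c' || i || ' boolean';
    END LOOP;
    -- One more ADD COLUMN here would fail: the 1600-column cap is reached.
END $$;
```

The effective limit can be lower than 1600 for wider data types, since the whole fixed-length part of the row still has to fit in one page.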