Hey,
I guess you know all about PL/R,
the R language extension for Postgres.
It is very convenient, though be careful, as it has sometimes crashed my server.
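For anyone reading along who hasn't used PL/R, a minimal function looks roughly like this; the function, table, and column names are made up for illustration, and it assumes the extension is installed:

```sql
-- Hypothetical example: hand a column of readings to R and let it
-- compute the standard deviation. Assumes PL/R is installed.
CREATE OR REPLACE FUNCTION r_stddev(vals float8[])
RETURNS float8 AS $$
    sd(vals)    -- the function body is plain R; arguments arrive as R variables
$$ LANGUAGE plr;

-- Usage: aggregate the readings into an array and pass them in.
SELECT r_stddev(array_agg(reading)) FROM measurements;
```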
Cheers,
Rémi-C
2014-07-16 3:42 GMT+02:00 John McKown :
On Tue, Jul 15, 2014 at 8:46 AM, David G Johnston <
david.g.johns...@gmail.com> wrote:
John McKown wrote
> I have a table which has some "raw" data in it. By "raw", I mean it is
> minimally processed from a log file. Every week, I update this table by
> processing the weekly log using awk to create a "psql script" file which
> looks similar to:
>
> COPY rawdata FROM STDIN;
> li
On 12/13/2013 4:46 AM, rob stone wrote:
The only fly in the ointment with this is a rain gauge. If you don't
empty it each day the actual rainfall is the difference between
readings.
(somewhat off topic)
The electronic rain gauges I've seen have all been tip-bucket. They
measure each 0.01" (o
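If the gauge readings are cumulative, the per-day rainfall mentioned above falls out of a window function; table and column names here are hypothetical:

```sql
-- Hypothetical schema: one cumulative gauge reading per day.
-- Daily rainfall is the difference between consecutive readings.
SELECT read_date,
       reading - lag(reading) OVER (ORDER BY read_date) AS rainfall
FROM rain_gauge
ORDER BY read_date;
```

lag() requires PostgreSQL 8.4 or later.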
On Thu, 2013-12-12 at 12:45 -0600, Seb wrote:
I'm working on the design of a database for time series data collected
--
Et in Arcadia, ego.
Floripa -- city of Land Rovers and alligators swimming in creeks.
On Fri, Dec 13, 2013 at 12:15 AM, Seb wrote:
> Hi,
>
> I'm working on the design of a database for time series data collected
> by a variety of meteorological sensors. Many sensors share the same
> sampling scheme, but not all. I initially thought it would be a good
> idea to have a table ident
On 09/05/13 17:42, Johann Spies wrote:
> Hallo Julian,
>
> Thanks for your reply.
>
> Firstly, don't worry too much about speed in the design phase,
> there may
> be differences of opinion here, but mine is that even with database
> design the first fundamental layer is the relation
On 08/05/13 21:21, Johann Spies wrote:
> Basically my request is for advice on how to make this database as
> fast as possible with as few instances of duplicated data while
> providing both for the updates on level 0 and value added editing on
> level 1.
>
> Regards
> Johann
Hi Johann.
Firstly,
In general, temporary tables are way faster for writing than normal tables as
they don't generate WAL records.
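A sketch of that pattern, with hypothetical file and table names, would be:

```sql
-- Temporary tables skip WAL, so bulk staging work is cheaper.
-- ON COMMIT DROP cleans the table up automatically at transaction end.
BEGIN;
CREATE TEMP TABLE staging (LIKE rawdata) ON COMMIT DROP;
COPY staging FROM '/tmp/weekly.csv' CSV;
INSERT INTO rawdata
    SELECT * FROM staging WHERE reading IS NOT NULL;
COMMIT;
```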
On Tuesday, February 12, 2013 11:45:22 AM Little, Douglas wrote:
Hi,
Design question.
Does it make a difference for a function to repeatedly update a temp table
verses the perman
Sent: 21 December 2011 22:07
To: Marc Mamin
Cc: Alban Hertroys; pgsql-general@postgresql.org
Subject: Re: [GENERAL] design help for performance
Thank you so much everyone! Introducing table C was indeed my next step
but I was unsure if I was going to be just moving the locking problems from
A
> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> ow...@postgresql.org] On Behalf Of Alban Hertroys
> Sent: Mittwoch, 21. Dezember 2011 08:53
> To: Culley Harrelson
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] design help for performance
On 21 Dec 2011, at 24:56, Culley Harrelson wrote:
> Several years ago I added table_b_rowcount to table A in order to minimize
> queries on table B. And now, as the application has grown, I am starting to
> have locking problems on table A. Any change to table B requires that the
> table_b_
>
> David J.
>
> *From:* pgsql-general-ow...@postgresql.org [mailto:
> pgsql-general-ow...@postgresql.org] *On Behalf Of *Misa Simic
> *Sent:* Tuesday, December 20, 2011 7:13 PM
> *To:* Culley Harrelson; pgsql-general@postgresql.org
> *Subject:* Re: [G
Hi Culley,
Have you tried to create fk together with index on fk column on table B?
What are results? Would be good if you could send the query and explain
analyze...
Sent from my Windows Phone
--
From: Culley Harrelson
Sent: 21 December 2011 00:57
To: pgsql-general
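On the FK-plus-index suggestion above: PostgreSQL does not automatically index the referencing side of a foreign key, so it often has to be added by hand. A sketch, with illustrative table and column names:

```sql
-- The FK constraint enforces integrity, but without an index on the
-- referencing column, deletes/updates on table_a must scan table_b.
ALTER TABLE table_b
    ADD CONSTRAINT table_b_a_fk FOREIGN KEY (a_id) REFERENCES table_a (id);

-- Adding the index lets the FK checks (and joins) use an index scan.
CREATE INDEX table_b_a_id_idx ON table_b (a_id);
```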
Ivan Sergio Borgonovo wrote:
> On Tue, 27 Oct 2009 09:17:59 +
> Richard Huxton wrote:
>
>> Ivan Sergio Borgonovo wrote:
>>> Hi,
>
>>> I have to generate unique passwords and associate them with emails.
>>> The association with emails is just to mail the password; email +
>>> password aren't "the pa
On Tue, 27 Oct 2009 09:17:59 +
Richard Huxton wrote:
> Ivan Sergio Borgonovo wrote:
> > Hi,
> > I have to generate unique passwords and associate them with emails.
> > The association with emails is just to mail the password; email +
> > password aren't "the password", just the password is.
> > So
Ivan Sergio Borgonovo wrote:
> Hi,
>
> I have to generate unique passwords and associate them with emails.
> The association with emails is just to mail the password; email +
> password aren't "the password", just the password is.
>
> So a bunch of emails may be associated with the same password.
>
> S
On Thu, 2009-10-22 at 03:13 -0700, Mike Christensen wrote:
> It has occurred to me that there might be some advantages of creating
> a separate table called "OrderArchive" which would be used to store
> order states 3 or 4. This would allow me to get rid of an index on
> order state as well as pro
On Tue, 18 Aug 2009 12:38:49 +0200
Pavel Stehule wrote:
> some unsafe function:
I suspected something similar.
I think many would appreciate if you put these examples here
http://www.okbob.blogspot.com/2008/06/execute-using-feature-in-postgresql-84.html
and substitute the int example there with
On Mon, 17 Aug 2009 12:48:21 +0200
Pavel Stehule wrote:
> Hello
>
> I am not sure, if it's possible for you. PostgreSQL 8.4 has EXECUTE
> USING clause, it is 100% safe.
Sorry I don't get it.
How can I use USING safely when the substitution involves a table
name?
The examples I've seen just in
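The distinction that question runs into: EXECUTE ... USING substitutes *values*, not identifiers, so a table name cannot go through a USING parameter. The identifier has to be spliced in with quote_ident() (or format() with %I on 9.1+). A hedged sketch, with a hypothetical function name and reusing the thread's `itemid` column:

```sql
-- quote_ident() safely quotes the table name; only the value goes
-- through the USING parameter, which is what keeps it injection-safe.
CREATE OR REPLACE FUNCTION count_rows(tbl text, min_id int)
RETURNS bigint AS $$
DECLARE
    n bigint;
BEGIN
    EXECUTE 'SELECT count(*) FROM ' || quote_ident(tbl)
            || ' WHERE itemid >= $1'
    INTO n USING min_id;
    RETURN n;
END;
$$ LANGUAGE plpgsql;
```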
On Mon, Aug 17, 2009 at 12:36:49PM +0200, Ivan Sergio Borgonovo wrote:
> I've several list of items that have to be rendered on a web apps in
> the same way.
[..]
> the nature of the lists and their usage pattern is very different.
> So unless someone come up with a better design I still would like
Hello
I am not sure, if it's possible for you. PostgreSQL 8.4 has EXECUTE
USING clause, it is 100% safe.
Pavel
2009/8/17 Ivan Sergio Borgonovo :
> I've several list of items that have to be rendered on a web apps in
> the same way.
>
> The structure is:
>
> create table items (
> itemid int pri
On Fri, 2009-07-31 at 08:38 -0300, pgsql-general-ow...@postgresql.org
wrote:
> Date: Fri, 31 Jul 2009 12:38:30 +0100
> From: Andre Lopes
> To: pgsql-general@postgresql.org
> Subject: Design Database, 3 degrees of Users.
> Message-ID:
> <18f98e680907310438o764e9bc7hbb6e245d8464...@mail.gmail.com>
>
On Fri, Jul 31, 2009 at 9:47 AM, Rich Shepard wrote:
> On Fri, Jul 31, 2009 at 4:38 AM, Andre Lopes wrote:
>
>> I need to design a Database that will handle 3 degrees of users:
>>
>> Administrators - They can see all the information in the database.
>> Managers - They can only see the information o
Andre Lopes wrote:
I need to design a Database that will handle 3 degrees of users:
Administrators - They can see all the information in the database.
Managers - They can only see the information of their dependants.
Dependants - Their actions must be approved by the managers.
A little more
On Fri, Jul 31, 2009 at 4:38 AM, Andre Lopes wrote:
I need to design a Database that will handle 3 degrees of users:
Administrators - They can see all the information in the database.
Managers - They can only see the information of their dependants.
Dependants - Their actions must be approved by t
Would Veil be useful to you?
http://veil.projects.postgresql.org/curdocs/index.html
On Fri, Jul 31, 2009 at 4:38 AM, Andre Lopes wrote:
> I need to design a Database that will handle 3 degrees of users:
>
>
> Administrators - They can see all the information in the database.
>
> Managers - They o
On Fri, Jul 31, 2009 at 12:38:30PM +0100, Andre Lopes wrote:
> I need to design a Database that will handle 3 degrees of users:
>
> Administrators - They can see all the information in the database.
> Managers - They can only see the information of their dependants.
> Dependants - Their actions must
Andreas wrote:
> who should own the db objects?
> I once read one should not let postgres or any other superuser own the
> tables and what not.
> Instead one should better create a separate user role with little
> privileges to be the owner.
> I'm not quite sure why this was advised. Maybe like n
Mike Diehl wrote:
> 1. Create a table for each spreadsheet, using column headings as field
> names.
> Every field would be a char/varchar. We might have a table to track which
> client owns which table. This could amount to 10's of tables being added to
> the db.
Give each client their own
On Wed, Sep 17, 2008 at 11:29 AM, Mike Diehl <[EMAIL PROTECTED]> wrote:
> Hi all,
>
> I've got a design question that I need to ask before I go too far down what
> might be the wrong road.
>
> I've got a customer, who has multiple customers, who need to be able to upload
> an excel spreadsheet into
Have you considered one large table with all of the columns from the various
spreadsheets, then a separate view for each customer?
- Original Message
From: Mike Diehl <[EMAIL PROTECTED]>
To: pgsql-general@postgresql.org
Sent: Wednesday, September 17, 2008 12:29:15 PM
Subject: [GENERAL
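The one-large-table-plus-views idea above can be sketched like this; the column and client names are purely illustrative:

```sql
-- One wide table holding the union of every spreadsheet's columns,
-- tagged by owner. Each customer then gets a view over their slice.
CREATE TABLE uploads (
    client_id int  NOT NULL,
    col_a     text,
    col_b     text,
    col_c     text
);

CREATE VIEW client_42_uploads AS
    SELECT col_a, col_b
    FROM uploads
    WHERE client_id = 42;
```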
On Thu, Aug 14, 2008 at 2:55 AM, Craig Ringer
<[EMAIL PROTECTED]> wrote:
> William Temperley wrote:
>> A. Two databases, one for transaction processing and one for
>> modelling. At arbitrary intervals (days/weeks/months) all "good" data
>> will be moved to the modelling database.
>> B. One databas
William Temperley wrote:
> Dear all
>
> I'd really appreciate a little advice here - I'm designing a PG
> database to manage a scientific dataset.
> I've these fairly clear requirements:
>
> 1. Multiple users of varying skill will input data.
> 2. Newly inserted data will be audited and marked go
On Sun, Mar 2, 2008 at 1:54 PM, Swaminathan Saikumar <[EMAIL PROTECTED]> wrote:
> I am building a web app with Postgres, that also uses Drupal with Postgres.
> I am new to all these frameworks.
>
> There is some data that I'll need to cross-reference between the two
> databases.
>
> Can I do a cros
--- Michael Glaesemann <[EMAIL PROTECTED]> wrote:
>
> On Oct 4, 2007, at 9:30 , Ted Byers wrote:
>
> > I do not know if PostgreSQL, or any other RDBMS,
> > includes the ability to call on software such as
> "R"
>
> See PL/R:
>
> http://www.joeconway.com/plr/
>
Thanks. Good to know.
Ted
--
Ted Byers wrote:
If you really have such a disparity among your series,
then it is a mistake to blend them into a single
table. You really need to spend more time analyzing
what the data means. If one data set is comprised of
the daily close price of a suite of stocks or mutual
funds, then it
On Oct 4, 2007, at 9:30 , Ted Byers wrote:
I do not know if PostgreSQL, or any other RDBMS,
includes the ability to call on software such as "R"
See PL/R:
http://www.joeconway.com/plr/
Michael Glaesemann
grzm seespotcode net
--- Andreas Strasser <[EMAIL PROTECTED]>
wrote:
> Hello,
>
> I'm currently designing an application that will
> retrieve economic data
> (mainly time series)from different sources and
> distribute it to clients.
> It is supposed to manage around 20.000 different
> series with differing
> numb
Pavel Stehule schrieb:
2007/10/4, Jorge Godoy <[EMAIL PROTECTED]>:
On Thursday 04 October 2007 06:20:19 Pavel Stehule wrote:
I'd use the same solution that he was going to: normalized table including a
timestamp (with TZ because of daylight saving times...), a column with a FK
to a series table
On Thursday 04 October 2007 06:20:19 Pavel Stehule wrote:
>
> I had good experience with the second variant. PostgreSQL needs 24 bytes
> for the head of every row, so it isn't very effective to store one field
> per row. You can simply do the transformation between array and table now.
But then you'll make all SQL
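The per-row-header point above is the motivation for array storage: one row per series per day instead of one row per sample. A sketch under assumed names, using unnest() (8.4+) for the array-to-rows transformation Pavel mentions:

```sql
-- One array row per day avoids paying the ~24-byte tuple header
-- for every individual sample.
CREATE TABLE series_daily (
    series_id int      NOT NULL,
    obs_date  date     NOT NULL,
    samples   float8[] NOT NULL,
    PRIMARY KEY (series_id, obs_date)
);

-- unnest() expands the array back into one row per sample for SQL use.
SELECT series_id, obs_date, unnest(samples) AS sample
FROM series_daily;
```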
2007/10/4, Andreas Strasser <[EMAIL PROTECTED]>:
> Hello,
>
> i'm currently designing an application that will retrieve economic data
> (mainly time series)from different sources and distribute it to clients.
> It is supposed to manage around 20.000 different series with differing
> numbers of obse
To: Gabriele <[EMAIL PROTECTED]>; pgsql-general@postgresql.org
Sent: Thursday, 5 July 2007, 17:08:46
Subject: Re: [GENERAL] Design Tool
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:pgsql-general-
> [EMAIL PROTECTED] On Behalf Of Gabriele
> Sent: Tuesday, July 03, 2007 2:43 PM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] Design Tool
>
> I need a design tool to design my database.
>
> Back in past I use
On 04.07.2007 10:44, Gabriele wrote:
Anyway it doesn't support SQLite.
Casestudio is a script-based framework; there is a lot of user-contributed
stuff. I remember having seen SQLite support somewhere; if not, it's not
so hard to add support yourself.
--
Regards,
Hannes Dorbath
--
On 03.07.2007 21:43, Gabriele wrote:
"Free" or "not so costly" license. If i use postgresql is also to save
money, as you might expect. A one hundred dollars software might be my
solution, a one thousand dollars is probably not.
Casestudio.. or Toad Data Modeler, as it is named these days, is a
Ian Harding wrote:
tsearch indexes have to reside in the table where the data is, for the
automagical functions that come with it to work. You can define a
view that joins the tables, then search each of the index columns for
the values you are looking for.
No they don't.
Joshua D. Drake
tsearch indexes have to reside in the table where the data is, for the
automagical functions that come with it to work. You can define a
view that joins the tables, then search each of the index columns for
the values you are looking for.
In my experience, the LIKE searches are fast for relative
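For context, the keep-the-index-in-the-table approach described above looks roughly like this in modern syntax (the thread predates it and used the tsearch2 module, but the shape is the same; the table name is hypothetical):

```sql
-- A tsvector column in the data table, indexed with GIN, is what the
-- "automagical" tsearch machinery expects to find.
ALTER TABLE docs ADD COLUMN search_vec tsvector;
UPDATE docs SET search_vec = to_tsvector('english', coalesce(body, ''));
CREATE INDEX docs_search_idx ON docs USING gin (search_vec);

-- Searching then uses the @@ match operator.
SELECT * FROM docs WHERE search_vec @@ to_tsquery('english', 'example');
```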
Thank you, Michael! I'm looking some examples and doing tests to find the
best search solution.
Best,
On 5/30/07, Michael Glaesemann <[EMAIL PROTECTED]> wrote:
On May 30, 2007, at 13:59 , Gabriel Laet wrote:
> I'm developing an application where basically I need to store cars.
> Every car ha
On May 30, 2007, at 13:59 , Gabriel Laet wrote:
I'm developing an application where basically I need to store cars.
Every car has a Make and Model association. Right now, I have three
tables: MAKE, MODEL (make_id) and CAR (model_id).
1) I'm not sure if I need or not to include "make_id" to the
"Filip Rembiałkowski" <[EMAIL PROTECTED]> writes:
> hmmm. just a general notice:
>
> A customer loyalty program, which expires earned points, not to let
> the customer "win" anything valuable?
> If I were your client, I wouldn't be happy with this.
On the other hand, having the possibility is bet
hmmm. just a general notice:
A customer loyalty program, which expires earned points, not to let
the customer "win" anything valuable?
If I were your client, I wouldn't be happy with this.
2007/3/18, Naz Gassiep <[EMAIL PROTECTED]>:
We are running a customer loyalty program whereby customers
>-Original Message-
>From: [EMAIL PROTECTED]
>[mailto:[EMAIL PROTECTED] On Behalf Of Naz Gassiep
>Sent: zondag 18 maart 2007 14:45
>To: Naz Gassiep
>Cc: pgsql-general@postgresql.org
>Subject: Re: [GENERAL] Design / Implementation problem
>
>Here it is again
Naz
First, there is nothing you can do about the computational load except make
your code as efficient as possible. Get it right and then make it fast.
But there is only so much you can do. If a "calculation" requires an
integer sum and an integer difference, you inevitably have two integer
Naz Gassiep <[EMAIL PROTECTED]> writes:
> that calculating the point
> balance on the fly is not an unfeasibly heavy duty calculation to be done at
> every page view?
One alternative to calculating it every time is calculating it once a day. If
there are calculations for today, then just read the c
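The once-a-day idea above amounts to caching the computed balance; a sketch with hypothetical table and column names:

```sql
-- Cache the expensive balance once per day instead of per page view.
CREATE TABLE point_balances (
    customer_id int  NOT NULL,
    as_of       date NOT NULL,
    balance     int  NOT NULL,
    PRIMARY KEY (customer_id, as_of)
);

-- Run once daily (e.g. from cron): snapshot every customer's balance.
INSERT INTO point_balances (customer_id, as_of, balance)
SELECT customer_id, current_date, sum(points)
FROM point_transactions
GROUP BY customer_id;
```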
Here it is again with more sensible wrapping:
*** The Scenario ***
We are running a customer loyalty program whereby customers earn points
for purchasing products. Each product has a value of points that are
earned by purchasing it, and a value of points required to redeem it.
In order to prev
William Harazim wrote:
My company is looking to use PostgreSQL in place of our existing system.
We currently house data for multiple clients. All of their data is
stored in a single database in different schemas for each client.
Most clients have multiple login accounts, which we maintain in
On Tue, 30 Jan 2001, Jeff wrote:
> I have a design question. Lets say we want to keep track of users and
> their respective snail mail addresses. Each user can have up to 4
> different mailing address. Is it better to have all this information in
> one table.
Only if you have mostly 4 address
> Each user can have up to 4
> different mailing address. Is it better to have all this information in
> one table. Or is it better to have a user table and an address table,
> and have the user id as a foreign key in the address table?
It's even possible (recommended by the books) to have sepa
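The separate-address-table layout the books recommend can be sketched like this; the columns are illustrative:

```sql
-- Normalized layout: users in one table, any number of addresses in
-- another, linked back by a foreign key.
CREATE TABLE users (
    user_id serial PRIMARY KEY,
    name    text NOT NULL
);

CREATE TABLE addresses (
    address_id serial PRIMARY KEY,
    user_id    int NOT NULL REFERENCES users (user_id),
    kind       text,   -- e.g. 'home', 'work'
    street     text,
    city       text
);
```

This removes the four-address cap entirely, at the cost of a join when the address is needed.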
On Tue, 30 Jan 2001, Jeff wrote:
> I have a design question. Lets say we want to keep track of users and
> their respective snail mail addresses. Each user can have up to 4
> different mailing address. Is it better to have all this information in
> one table. Or is it better to have a user tab
Depends on your needs. Typically I would always break out the addresses
into another table, since it's much more flexible. Creating four separate
address fields in the user table will reduce the need to perform joins and
thus perhaps make your schema a little simpler, and your queries a little