On Mon, Feb 10, 2014 at 8:45 AM, Mark Wong wrote:
> Hello everybody,
>
> I was wondering if anyone had any experiences they can share when
> designing the time dimension for a star schema and the like. I'm
> curious about how well it would work to use a timestamp for the
> attribute key, as opposed to a surrogate key, and populating the time
> dimension with
Mark Wong writes:
> On Mon, Feb 10, 2014 at 9:20 AM, CS DBA wrote:
>> In the case of this being a timestamp I suspect the performance would
>> take a hit, depending on the size of your fact table and the
>> scope/volume of your DSS queries this could easily be a show stopper
>> based on the assum
On Mon, Feb 10, 2014 at 9:20 AM, CS DBA wrote:
> I've done a lot of DSS architecture. A couple of thoughts:
>
> - in most cases the ETL process figures out the time id's as part of the
> preparation and then does bulk loads into the fact tables
> I would be very concerned about performance of a trigger that
> fired for every row on the fact table
On Tue, Feb 11, 2014 at 12:20 AM, Marti Raudsepp wrote:
> This is on Ubuntu 13.10 (kernel 3.11) with XFS (mounted with noatime,
> no other customizations).
I managed to track this down; XFS doesn't allow using O_DIRECT for
writes smaller than the filesystem's sector size (probably same on
other
Hi list,
I'm in the middle of setting up a new machine and there's something
odd in pg_test_fsync output. Does anyone have ideas why open_sync
tests would fail in the middle?:
4 * 4kB open_sync writes 89.322 ops/sec 11195 usecs/op
8 * 2kB open_sync writes write
On Feb 9, 2014, at 2:48 PM, John Anderson wrote:
> What I'm wondering is if there is a more denormalized view of this type of
> data that would make those of types of queries quicker?
That sounds like a materialized view?
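Not from the thread, but as a minimal sketch of that suggestion (the table
and column names here are invented):

  -- Precompute the expensive aggregate once, then refresh it on a schedule.
  CREATE MATERIALIZED VIEW daily_totals AS
  SELECT date_trunc('day', created_at) AS day, count(*) AS events
  FROM events
  GROUP BY 1;

  -- Re-runs the stored query and replaces the stored result.
  REFRESH MATERIALIZED VIEW daily_totals;

REFRESH locks the view and recomputes it in full, so it is typically run
from cron or after each batch load.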
Thanks for all the replies. They were all right on. For some unknown
reason, the client_min_messages was set to DEBUG5. Not sure how this
happened but with your help, I now know how to get it back to where it was.
Thanks again for all the quick feedback.
- Peter
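For the archives, a quick sketch of checking and resetting that setting
(the role name below is made up):

  -- See the current value and where it was set from.
  SHOW client_min_messages;
  SELECT name, setting, source FROM pg_settings
  WHERE name = 'client_min_messages';

  -- Reset for the current session, or persist it per role.
  SET client_min_messages = notice;
  ALTER ROLE some_user SET client_min_messages = notice;  -- hypothetical role

notice is the default, so this just puts the verbosity back where it started.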
I've done a lot of DSS architecture. A couple of thoughts:
- in most cases the ETL process figures out the time id's as part of the
preparation and then does bulk loads into the fact tables
I would be very concerned about performance of a trigger that
fired for every row on the fact table
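As an illustration of that load pattern (table, column, and file names are
invented, and the staging, fact, and dimension tables are assumed to already
exist): the time keys are resolved in one set-based pass during the load
rather than per row in a trigger.

  -- Stage the raw rows, then resolve the time key with a single join.
  COPY staging_sales (sold_at, store_id, amount)
  FROM '/tmp/sales.csv' WITH (FORMAT csv);

  INSERT INTO fact_sales (time_id, store_id, amount)
  SELECT d.time_id, s.store_id, s.amount
  FROM staging_sales s
  JOIN time_dim d ON d.ts = date_trunc('day', s.sold_at);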
Somehow your postgres log statements are getting echoed to the front
end. Did you change anything about the postgres (server) configuration file?
On Mon, Feb 10, 2014 at 07:43:33AM -0800, peterlen wrote:
> We are using PostgreSQL 9.3. Something seems to have changed with our psql
> command-line
Hello everybody,
I was wondering if anyone had any experiences they can share when
designing the time dimension for a star schema and the like. I'm
curious about how well it would work to use a timestamp for the
attribute key, as opposed to a surrogate key, and populating the time
dimension with
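Purely as a sketch of the two designs being weighed here (all names invented):

  -- Surrogate-key version: facts carry an integer the ETL must look up.
  CREATE TABLE time_dim (
      time_id  integer PRIMARY KEY,      -- e.g. 20140210
      ts       timestamp NOT NULL UNIQUE,
      year     smallint NOT NULL,
      month    smallint NOT NULL,
      day      smallint NOT NULL,
      dow      smallint NOT NULL
  );

  -- Timestamp-key version: facts reference the timestamp directly,
  -- so no key lookup is needed at load time.
  CREATE TABLE time_dim_ts (
      ts     timestamp PRIMARY KEY,
      year   smallint NOT NULL,
      month  smallint NOT NULL,
      day    smallint NOT NULL,
      dow    smallint NOT NULL
  );

The trade-off is an 8-byte key with no lookup versus a 4-byte key that the
ETL (or a trigger) has to resolve for every fact row.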
> This is now my ranked shortlist which I will evaluate further:
> 1. Camelot: http://www.python-camelot.com - PyQt
> 2. Dabo: http://www.dabodev.com - wxPython
> 3. Gui2Py: http://code.google.com/p/gui2py/ - wxPython
> 4. Kiwi: http://www.async.com.br/projects/kiwi - PyGTK
> 5. Sqlkit: http://sqlk
On 10/02/2014 16:43, peterlen wrote:
> We are using PostgreSQL 9.3. Something seems to have changed with
> our psql command-line output since we first installed it. When I
> run commands at my psql prompt, I am getting a lot of debug
> statements which I was not getting before.
2014-02-11 0:43 GMT+09:00 peterlen :
> We are using PostgreSQL 9.3. Something seems to have changed with our psql
> command-line output since we first installed it. When I run commands at my
> plsql prompt, I am getting a lot of debug statements which I was not getting
> before. I am just trying to find out how to tell psql not to display
- Original Message -
> From: peterlen
> To: pgsql-general@postgresql.org
> Cc:
> Sent: Monday, 10 February 2014, 15:43
> Subject: [GENERAL] How to turn off DEBUG statements from psql commands
>
> We are using PostgreSQL 9.3. Something seems to have changed with our psql
> command-line output since we first installed it.
We are using PostgreSQL 9.3. Something seems to have changed with our psql
command-line output since we first installed it. When I run commands at my
psql prompt, I am getting a lot of debug statements which I was not getting
before. I am just trying to find out how to tell psql not to display
I can understand it will create duplicates, but it would also allow for
recovery from backups.
If a backup was taken at midnight, and promotion happened at 6am then
having archiving on the slave would allow log replay from the backup.
Log replay from the old master would potentially end up in the
> From: Shaun Thomas
>To: 'bricklen'
>Cc: "pgsql-general@postgresql.org"
>Sent: Friday, 7 February 2014, 22:36
>Subject: Re: [GENERAL] Better Connection Statistics
>
>
>> I don't know any tools off-hand, but you might be able to generate
>> partial statistics from the log files with a descr
James Sewell wrote:
> If it is, then the only way that I could achieve what I wanted would be to set
> wal_keep_segments high enough that they will all be archived on promotion?
Even if you set wal_keep_segments high I don't think that the replayed
WAL will be archived.
> I'm still not sure why the
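Not from the thread, but for anyone checking this on their own standby, the
parameters being discussed can be inspected with:

  SELECT name, setting, context
  FROM pg_settings
  WHERE name IN ('archive_mode', 'archive_command', 'wal_keep_segments');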