2015-03-18 14:31 GMT-03:00 Vivekanand Joshi :
> So, here is the first taste of success, which gives me the confidence
> that if properly worked out with good hardware and proper tuning,
> PostgreSQL could be a good replacement.
>
> Out of the 9 reports which need to be migrated to PostgreSQL
2014-11-02 19:16 GMT-02:00 Mike Wilson :
> Thanks for the information Greg.
>
> Unfortunately modifying the application stack this close to the holiday
> season won’t be an option, so I’m left with:
> 1) Trying to optimize the settings I have for the query mix I have.
> 2) Optimize any long r
2014-10-21 10:57 GMT-02:00 :
>
>
> Hi all,
>
> I'm experimenting with table partitioning through inheritance. I'm testing
> a query as follows:
>
> explain (analyze, buffers)
> select response.id
> from claim.response
> where response.account_id = 4766
> and response.expire_timestamp is null
> and
2014-10-20 21:59 GMT-02:00 Tom Lane :
> Marco Di Cesare writes:
> > We are using a BI tool that generates a query with an unusually large
> > number of joins. My understanding is that with this many joins Postgres'
> > query planner can't possibly use an exhaustive search, so it drops into a
> > heuristi
2014-10-16 14:04 GMT-03:00 Jeff Janes :
> On Thu, Oct 16, 2014 at 5:35 AM, Дмитрий Шалашов
> wrote:
>
>> Hi,
>>
>> lets imagine that we have some table, partitioned by timestamp field, and
>> we query it with SELECT with ordering by that field (DESC for example),
>> with some modest limit.
>> Let
2014-10-06 11:54 GMT-03:00 Emi Lu :
> Hello List,
>
> May I know whether this will cause any potential performance issues for
> psql 8.3, please?
> version (PostgreSQL 8.3.18 on x86_64-unknown-linux-gnu, compiled by GCC
> 4.1.2)
>
> E.g., got 10 idle connections for 10 days.
> select current_query from pg_sta
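For reference, on PostgreSQL 8.3 idle sessions can be inspected through pg_stat_activity, where an idle backend shows `<IDLE>` in the `current_query` column (this sketch uses the 8.3-era column names `procpid` and `current_query`; the interval threshold is an arbitrary example):

```sql
-- List backends that have been sitting idle for a long time (PostgreSQL 8.3:
-- idle sessions report '<IDLE>' in current_query).
SELECT procpid, usename, backend_start, query_start
FROM pg_stat_activity
WHERE current_query = '<IDLE>'
  AND query_start < now() - interval '1 day';
```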
This might also help:
http://www.postgresql.org/docs/9.1/static/populate.html
Bulk-loading tables from text files is "log free" in almost all RDBMSs
(Postgres' COPY is one of them).
The reason is that the database doesn't need to spend resources writing to
the log, because there is no risk of data loss.
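As a minimal sketch of COPY, assuming a made-up staging table and file path (note that in practice Postgres only skips WAL for COPY when the table was created or truncated in the same transaction and WAL archiving is off):

```sql
-- Hypothetical staging table; COPY loads the whole file in one command.
CREATE TABLE staging_events (id bigint, payload text);

BEGIN;
TRUNCATE staging_events;                         -- same-transaction truncate
COPY staging_events FROM '/tmp/events.csv' CSV;  -- allows the WAL shortcut
COMMIT;
```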
Hi Emi,
Databases that comply with the ACID standard (
http://en.wikipedia.org/wiki/ACID) ensure that there is no data loss by
first writing the data changes to the database log, as opposed to
updating the actual data on the filesystem first (in the datafiles).
Each database has its own way of do
Your question: Is there any way that I can build multiple indexes on one
table without having to scan the table multiple times?
My answer: I don't think so. Since each index has a different indexing
rule, it will analyze the same table in a different way. I've built indexes
on a 100GB table recent
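To make the point concrete: each CREATE INDEX performs its own full scan of the table, and PostgreSQL has no single-scan, multi-index command. A sketch, using a hypothetical `orders` table (the `maintenance_work_mem` value is just an illustrative setting):

```sql
-- Each of these statements scans the table once.
SET maintenance_work_mem = '1GB';  -- more sort memory speeds up each build
CREATE INDEX idx_orders_customer ON orders (customer_id);
CREATE INDEX idx_orders_created  ON orders (created_at);
```

The builds can at least be overlapped by issuing the CREATE INDEX statements from separate sessions, so the scans run concurrently even though each still reads the whole table.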
Hi Nicolas,
I do believe PostgreSQL can handle that.
I've worked with tables that receive 2 million rows per day, which gives us
an average of about 700 million rows per year.
It's hard to say how much hardware power you will need, but I would say
test it with a server in the cloud, since servers in the cloud are usua
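At that volume, inheritance-based partitioning (the same technique discussed earlier in this thread list) is the usual way to keep the table manageable. A sketch with hypothetical table names, one monthly child whose CHECK constraint lets the planner prune partitions:

```sql
-- Parent table plus one monthly child; the CHECK constraint allows
-- constraint_exclusion to skip irrelevant partitions at plan time.
CREATE TABLE events (
    id         bigserial,
    created_at timestamptz NOT NULL,
    payload    text
);

CREATE TABLE events_2014_10 (
    CHECK (created_at >= '2014-10-01' AND created_at < '2014-11-01')
) INHERITS (events);

SET constraint_exclusion = partition;
```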