On Tue, Oct 1, 2013 at 5:30 PM, akp geek wrote:
> I tried tunneling this morning and it did not work. When I tried the
> tunneling command from the URL you mentioned, I got the following error. I
> will try to find out what exactly it means, but any help is appreciated.
>
> command-line: line 0: Bad config
On Tue, Oct 1, 2013 at 6:15 PM, Jaime Casanova wrote:
> you don't need to use streaming replication for a hot standby, it
> works perfectly well even if you replay everything from archive and
> never do streaming.
Right, I mixed the terms up a bit.
> but it would be a good idea to set hot
On Tue, Oct 1, 2013 at 3:00 PM, Mark Jones wrote:
> Thanks for your quick response John.
>
> From the limited information, it is mostly relational.
> As for usage patterns, I do not have that yet.
> I was just after a general feel of what is out there size wise.
>
Usage patterns are going to be c
I have the following query.
with parsed_data as (
    SELECT
        devicereportedtime,
        DATE_TRUNC('minute', devicereportedtime
            - (EXTRACT(minute FROM devicereportedtime)::integer % 5
               || ' minutes')::interval) AS interval_start
    FROM systemevents
    WHERE devicereportedtime >=
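Sketching the query's bucketing arithmetic outside SQL may help check it; a minimal Python equivalent of the subtract-minutes-modulo-5, then truncate-to-minute logic (function name and sample timestamp are just illustrations):

```python
from datetime import datetime, timedelta

def interval_start(ts: datetime) -> datetime:
    # Subtract (minute % 5) minutes, then truncate to the minute, mirroring
    # DATE_TRUNC('minute', ts - (EXTRACT(minute FROM ts)::int % 5 || ' minutes')::interval)
    shifted = ts - timedelta(minutes=ts.minute % 5)
    return shifted.replace(second=0, microsecond=0)

print(interval_start(datetime(2013, 10, 1, 14, 37, 42)))  # 2013-10-01 14:35:00
```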
On 10/1/2013 6:53 PM, Stephen Frost wrote:
I don't think I'd recommend building a single-image PG database on that
scale but rather would shard it.
sharding only works well if your data has natural divisions and you're
not doing complex joins/aggregates across those divisions.
--
john r pierce
* John R Pierce (pie...@hogranch.com) wrote:
> if we assume the tables average 1 KB/record (which is a fairly large
> record size, even including indexing), you're looking at 400 billion
> records. if you can populate these at 5000 records/second, it
> would take 2.5 years of 24/7 operation to populate them.
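The back-of-the-envelope arithmetic above checks out; a quick sketch (figures taken from the message, variable names mine):

```python
total_bytes = 400 * 10**12            # 400 TB of data
record_size = 1_000                   # ~1 KB/record, including index overhead
records = total_bytes // record_size  # 400 billion records
rate = 5_000                          # inserts per second
years = records / rate / (365 * 24 * 3600)
print(f"{records:,} records, ~{years:.1f} years to populate")  # ~2.5 years
```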
On Tue, Oct 1, 2013 at 5:46 PM, Sergey Konoplev wrote:
> On Tue, Oct 1, 2013 at 2:03 PM, akp geek wrote:
>> One more thing.. pardon me for being dumb
>>
>> I want to set the 2nd slave as HOT STANDBY, not streaming ..
>
> Hot standby assumes streaming. You cannot establish a hot
> standby without using streaming replication.
Thanks, I can try this. Any idea about the message below? Thanks for your
patience.
I tried tunneling this morning and it did not work. When I tried the
tunneling command from the URL you mentioned, I got the following error. I
will try to find out what exactly it means, but any help is appreciated.
command-line: line 0: Bad config
On Tue, Oct 1, 2013 at 2:03 PM, akp geek wrote:
> One more thing.. pardon me for being dumb
>
> I want to set the 2nd slave as HOT STANDBY, not streaming ..
Hot standby assumes streaming. You cannot establish a hot
standby without using streaming replication. What is the reason not to
do it?
On 02/10/13 07:49, Mark Jones wrote:
> Hi all,
>
> We are currently working with a customer who is looking at a database
> of between 200-400 TB! They are after any confirmation of PG working
> at this size or anywhere near it.
> Has anyone out there worked on anything like this size in PG, please? If
> so, can you let me know more details etc..
On 10/1/2013 3:00 PM, Mark Jones wrote:
> From the limited information, it is mostly relational.
phew. that's going to be a monster. 400 TB on 600 GB 15000 RPM SAS
drives in RAID 10 will require around 1400 drives. at 25 disks per 2U
drive tray, that's two 6-foot racks of nothing but disks, and to
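The drive-count arithmetic can likewise be sketched (figures from the message; tray geometry as stated):

```python
import math

capacity_tb = 400
drive_tb = 0.6                         # 600 GB drives
data_drives = capacity_tb / drive_tb   # ~667 drives of raw capacity
raid10_drives = data_drives * 2        # RAID 10 mirrors every drive
trays = math.ceil(raid10_drives / 25)  # 25 disks per 2U tray
print(math.ceil(raid10_drives), "drives,", trays, "trays")  # 1334 drives, 54 trays
```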
Maybe some of these folks can chime in?
http://cds.u-strasbg.fr/
Simbad (and I think VizieR) runs on PostgreSQL. A friend of mine is a
grad student in astronomy and he told me about them.
Jeff Ross
On 10/1/13 3:49 PM, Mark Jones wrote:
Hi all,
We are currently working with a customer who is looking at a database
of between 200-400 TB!
Thanks for your quick response John.
From the limited information, it is mostly relational.
As for usage patterns, I do not have that yet.
I was just after a general feel of what is out there size wise.
Regards
Mark Jones
Principal Sa
On 10/1/2013 2:49 PM, Mark Jones wrote:
We are currently working with a customer who is looking at a database
of between 200-400 TB! They are after any confirmation of PG working
at this size or anywhere near it.
is that really 200-400 TB of relational data, or is it 199-399 TB of bulk
data (b
Hi all,
We are currently working with a customer who is looking at a database of
between 200-400 TB! They are after any confirmation of PG working at this
size or anywhere near it.
Anyone out there worked on anything like this size in PG please? If so, can
you let me know more details etc..
--
One more thing.. pardon me for being dumb
I want to set the 2nd slave as HOT STANDBY, not streaming ..
What would be the steps?
On the primary I will have the archive_command;
on the slave, in recovery.conf, the restore_command.
After I make my slave exactly match the master, how can the slave get
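A minimal sketch of the archive-shipping setup described above, for the 9.x-era recovery.conf style current on this thread; the archive path is a hypothetical placeholder:

```ini
# primary postgresql.conf
wal_level = hot_standby
archive_mode = on
archive_command = 'cp %p /archive/%f'   # hypothetical archive location

# standby postgresql.conf
hot_standby = on

# standby recovery.conf
standby_mode = 'on'
restore_command = 'cp /archive/%f %p'
```

With this in place the standby replays WAL from the archive and accepts read-only queries, with no streaming connection involved.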
2013/10/1 Perry Smith
>
> On Oct 1, 2013, at 12:23 PM, Adrian Klaver
> wrote:
>
> > Assuming you are not doing this in a function, you can: do the UPDATE, then
> > SELECT to see your changes (or not), and then ROLLBACK.
>
> Ah... yes. I forgot you can see the changes within the same transaction.
> Dohh
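The try-it-then-ROLLBACK pattern above can be demonstrated with any transactional database; a minimal sketch using Python's sqlite3 standard-library module (table and values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
conn.execute("INSERT INTO t (val) VALUES ('before')")
conn.commit()

# Run the risky UPDATE inside a transaction and inspect the result;
# the change is visible to this session before any COMMIT...
conn.execute("UPDATE t SET val = 'after'")
print(conn.execute("SELECT val FROM t").fetchone()[0])  # after

# ...then ROLLBACK instead of COMMIT, and the change is discarded.
conn.rollback()
print(conn.execute("SELECT val FROM t").fetchone()[0])  # before
```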
On Oct 1, 2013, at 12:23 PM, Adrian Klaver wrote:
> On 10/01/2013 10:16 AM, Perry Smith wrote:
>> With "make" I can do "make -n" and it just tells me what it would do but
>> doesn't actually do anything.
>>
>> How could I do that with SQL?
>>
>> I want to write a really complicated (for me) S
On 10/01/2013 10:16 AM, Perry Smith wrote:
With "make" I can do "make -n" and it just tells me what it would do but
doesn't actually do anything.
How could I do that with SQL?
I want to write a really complicated (for me) SQL UPDATE statement. I'm sure I
won't get it right the first time. I
With "make" I can do "make -n" and it just tells me what it would do but
doesn't actually do anything.
How could I do that with SQL?
I want to write a really complicated (for me) SQL UPDATE statement. I'm sure I
won't get it right the first time. Is there an easy way to not really make the
c
It is a firewall issue; they can't open the port that we requested. So, as
you mentioned, I will tunnel to the primary. Will give that a try.
regards
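For reference, an SSH tunnel of that shape usually looks like the following; the hostnames, ports, and user here are all hypothetical placeholders:

```shell
# Forward local port 5433 to the primary's PostgreSQL port (5432) through
# a host the firewall does permit (all names here are hypothetical).
ssh -N -L 5433:primary.internal:5432 replication@bastion.example.com

# The standby then points its connection at the local end of the tunnel,
# e.g. in recovery.conf:
#   primary_conninfo = 'host=localhost port=5433 user=replicator'
```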
On Mon, Sep 30, 2013 at 11:10 PM, Chris Travers wrote:
>
>
>
> On Mon, Sep 30, 2013 at 7:14 PM, akp geek wrote:
>
>> Hi all -
>>
>>
Hi,
I was looking for options to make sure that SQL statements executed inside
functions also get logged. Since this is a production system, I wanted to do
it without EXPLAIN output also being written to the logs. Maybe that is not
possible?
Regards,
Jayadevan
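One possible approach, assuming the contrib module is available: pg_stat_statements with nested-statement tracking records statements executed inside functions, in a view rather than the server log, and without any EXPLAIN output:

```ini
# postgresql.conf (changing shared_preload_libraries requires a restart)
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.track = all   # also track statements inside functions
```

After a restart, run CREATE EXTENSION pg_stat_statements in the database and query the pg_stat_statements view.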
On Mon, Sep 30, 2013 at 5:08 PM, Albe Laurenz wrote:
>