John,
> When a write() to a Fusion-io device has been acknowledged, the data is
> guaranteed to be stored safely. This is a strict requirement for any
> enterprise-ready storage device.
Thanks for the clarification!
While you're here, any general advice on configuring FusionIO devices
for databases?
On Dec 23, 2010, at 13:22:32, Ben Chobot wrote:
>
> On Dec 23, 2010, at 11:58 AM, Andy wrote:
> >
> > Somewhat tangential to the current topics, I've heard that FusionIO
> > uses internal cache and hence is not crash-safe, and if the cache is
> > turned off performance will take a big hit. Is that your experience?
On Dec 23, 2010, at 11:58 AM, Andy wrote:
>
> Somewhat tangential to the current topics, I've heard that FusionIO uses
> internal cache and hence is not crash-safe, and if the cache is turned off
> performance will take a big hit. Is that your experience?
It does use an internal cache, but it
On Dec 23, 2010, at 12:52 PM, Tom Lane wrote:
> Ben writes:
>> i have a schema similar to the following
>
>> create index foo_s_idx on foo using btree (s);
>> create index foo_e_idx on foo using btree (e);
>
>> i want to do queries like
>
>> select * from foo where 150 between s and e;
>
> That index structure is really entirely unsuited to what you want
Ben writes:
> i have a schema similar to the following
> create index foo_s_idx on foo using btree (s);
> create index foo_e_idx on foo using btree (e);
> i want to do queries like
> select * from foo where 150 between s and e;
That index structure is really entirely unsuited to what you want
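A sketch of one common fix, not from this thread: on PostgreSQL 9.2 or later (newer than the versions being discussed here), a single GiST index over a constructed range can answer the containment query that two separate btree indexes cannot:

```sql
-- Sketch only: assumes PostgreSQL 9.2+ range types, which postdate this thread.
-- One GiST index over the closed interval [s, e] answers containment queries
-- directly, instead of intersecting two open-ended btree scans.
create index foo_range_idx on foo using gist (int4range(s, e, '[]'));

-- The query must use the same expression so the planner can match the index:
select * from foo where int4range(s, e, '[]') @> 150;
```

This works because `@>` (range contains element) is a GiST-indexable operator, while `where 150 between s and e` as written gives the planner nothing either btree index can use efficiently.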
hello --
i have a schema similar to the following
create table foo (
id integer not null,
val integer not null,
s integer not null,
e integer not null
);
create index foo_s_idx on foo using btree (s);
create index foo_e_idx on foo using btree (e);
i want to do queries like
select * from foo where 150 between s and e;
--- On Thu, 12/23/10, John W Strange wrote:
> Typically my problem is that the
> large queries are simply CPU bound. Do you have
> sar/top output you can share? I'm currently setting up two
> FusionIO DUO @640GB in an LVM stripe to do some testing with,
> and I will publish the results after I'm done.
On Thu, 2010-12-23 at 11:24 -0700, Scott Marlowe wrote:
> On Thu, Dec 23, 2010 at 10:37 AM, Przemek Wozniak wrote:
> > When testing the IO performance of ioSAN storage device from FusionIO
> > (650GB MLC version) one of the things I tried is a set of IO intensive
> > operations in Postgres: bulk data loads, updates, and queries calling
> > for random IO.
John W Strange wrote:
> Typically my problem is that the large queries are simply CPU
> bound.
Well, if your bottleneck is CPU, then you're obviously not going to
be driving another resource (like disk) to its limit. First,
though, I want to confirm that your "CPU bound" case isn't in the
"I/
Typically my problem is that the large queries are simply CPU bound. Do you
have sar/top output you can share? I'm currently setting up two FusionIO DUO
@640GB in an LVM stripe to do some testing with, and I will publish the
results after I'm done.
If anyone has some tests/suggestions they would
On Thu, Dec 23, 2010 at 10:37 AM, Przemek Wozniak wrote:
> When testing the IO performance of ioSAN storage device from FusionIO
> (650GB MLC version) one of the things I tried is a set of IO intensive
> operations in Postgres: bulk data loads, updates, and queries calling
> for random IO. So far I cannot make Postgres take advantage of this
> tremendous IO capacity.
When testing the IO performance of ioSAN storage device from FusionIO
(650GB MLC version) one of the things I tried is a set of IO intensive
operations in Postgres: bulk data loads, updates, and queries calling
for random IO. So far I cannot make Postgres take advantage of this
tremendous IO capacity.
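Not advice from this message, but the usual first knobs to look at when Postgres fails to saturate a fast flash device; the values below are illustrative assumptions, not recommendations:

```
# postgresql.conf sketch -- illustrative values only; benchmark before adopting.
# Flash makes random reads nearly as cheap as sequential ones, so stop the
# planner from avoiding index scans out of rotating-disk habit.
random_page_cost = 1.1
seq_page_cost = 1.0
# Let bitmap heap scans issue concurrent prefetch requests.
effective_io_concurrency = 32
# Spread checkpoint writes out so bulk loads are not stalled by write bursts.
checkpoint_completion_target = 0.9
```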
Hi,
On Thursday 23 December 2010 17:53:24 Desmond Coertzen wrote:
> Is it possible to create an index on a field of the value returned by a
> function, where that value's data type contains subfields?
> Is this possible? How would I write the statement?
I am not sure I understood you correctly. Maybe you mean something
Hello,
Is it possible to create an index on a field of the value returned by a
function, where that value's data type contains subfields?
It is possible to do this:
create index indx_test
on address
(sp_address_text_to_template(address_text))
where (sp_address_text_to_template(address_text)).city_name =
'some_city_on_
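A self-contained sketch of the pattern being asked about; every name below is hypothetical, not taken from the thread. The key points are that the function must be marked IMMUTABLE to be usable in an index, and that selecting a subfield of a function result needs an extra set of parentheses:

```sql
-- Hypothetical composite type and parser, for illustration only.
create type addr_parts as (city_name text, street text);

create function parse_addr(t text) returns addr_parts
language sql immutable as $$
    select split_part(t, ',', 1), split_part(t, ',', 2)
$$;

create table address (address_text text);

-- (expr).field selects one subfield of the composite result; the whole
-- expression is wrapped in another pair of parentheses for the index.
create index addr_city_idx
    on address (((parse_addr(address_text)).city_name));
```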
What you still haven't clarified is how long each exe/user combo keeps
the connection open for.
If for a day, then who cares that it takes 4 seconds each morning to
open them all?
If for a fraction of a second, then you do not need 200 simultaneous
open connections, they can probably share a much smaller pool.
On Thu, Dec 23, 2010 at 9:37 PM, Kevin Grittner wrote:
> tuanhoanganh wrote:
>
> > Could you show me what parameter of pgbouncer.ini can do that? I
> > read the pgbouncer docs and cannot make pgbouncer open and keep 200
> > connections to postgres.
>
> What makes you think that 200 connections to PostgreSQL will be a
> good idea?
On Thu, Dec 23, 2010 at 09:20:59PM +0700, tuanhoanganh wrote:
> Could you show me what parameter of pgbouncer.ini can do that? I read the
> pgbouncer docs and cannot make pgbouncer open and keep 200 connections to
> postgres.
> (Sorry for my English)
>
> Thank you very much.
>
> Tuan Hoang ANh
>
You need to
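For the specific question of keeping 200 server connections open, pgbouncer's pool sizing parameters apply; a sketch of a pgbouncer.ini fragment, with a placeholder database name and illustrative sizes:

```
; pgbouncer.ini sketch -- database name and numbers are placeholders.
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_port = 6432
pool_mode = session
; hard cap on server connections per user/database pair
default_pool_size = 200
; keep at least this many server connections open, even when idle
min_pool_size = 200
max_client_conn = 1000
```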
tuanhoanganh wrote:
> Could you show me what parameter of pgbouncer.ini can do that? I
> read the pgbouncer docs and cannot make pgbouncer open and keep 200
> connections to postgres.
What makes you think that 200 connections to PostgreSQL will be a
good idea? Perhaps you want a smaller number of connections
Could you show me what parameter of pgbouncer.ini can do that? I read the
pgbouncer docs and cannot make pgbouncer open and keep 200 connections to
postgres.
(Sorry for my English)
Thank you very much.
Tuan Hoang ANh
On Wed, Dec 22, 2010 at 7:13 PM, Gurjeet Singh wrote:
> On Wed, Dec 22, 2010 at 6:28 AM, t