Hello,
I have a table with 5 columns; the last 3 indicate whether the
employee worked on a given project or not. The values are simply Yes
and No. What I need is a list of people who haven't worked on at least
one project.
So, for example, we have data:
John Smith Yes No Yes
Tim Robins No
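A minimal query sketch for that, assuming a hypothetical layout (an "employees" table with one text column per project holding the Yes/No values; the real table and column names will differ):

  -- Hypothetical layout, e.g.:
  -- CREATE TABLE employees (name text, project_a text, project_b text, project_c text);

  -- People who have NOT worked on at least one project, i.e. anyone with a
  -- 'No' (or a NULL) in any of the three project columns.
  SELECT name
  FROM employees
  WHERE coalesce(project_a, 'No') = 'No'
     OR coalesce(project_b, 'No') = 'No'
     OR coalesce(project_c, 'No') = 'No';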
Lukasz Brodziak wrote:
> I have a table with 5 columns; the last 3 indicate whether the
> employee worked on a given project or not. The values are simply Yes
> and No. What I need is a list of people who haven't worked on at least
> one project.
> So, for example, we have data:
>
> John Smi
Hi,
We run a fairly write-intensive workload and are looking at upgrading
our Pg servers. (PostgreSQL 9.1; no practical way to upgrade to 9.2 for
a while because we use what's packaged with Debian.)
I'm considering the following configuration:
Dual 4-core Intel CPU (E5620 at 2.4GHz)
192G
On 08/12/2013 08:28 AM, David F. Skoll wrote:
Hi,
We run a fairly write-intensive workload and are looking at upgrading
our Pg servers. (PostgreSQL 9.1; no practical way to upgrade to 9.2 for
a while because we use what's packaged with Debian.)
apt.postgresql.org
I'm considering the foll
On Mon, Aug 12, 2013 at 8:28 AM, David F. Skoll wrote:
>
> 3) Our current workload peaks at about 5000 transactions per second;
> you can assume about one-third to one-half of those are writes. Do
> you think we can get away with 16 10Krpm SATA drives instead of the
> SSDs?
>
pgbench peaks out a
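Regarding the quoted estimate that one-third to one-half of the transactions are writes: a rough way to gauge the read/write mix on a running 9.1 server is the standard pg_stat_database view (a generic sketch; sample it twice and diff the counters to get rates, and note that it counts rows rather than transactions):

  SELECT datname,
         xact_commit,
         xact_rollback,
         tup_inserted + tup_updated + tup_deleted AS rows_written,
         tup_returned + tup_fetched               AS rows_read
  FROM pg_stat_database;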
When you say "16 10K drives," do you mean:
a) RAID 0 with 16 drives?
b) RAID 1 with 8+8 drives?
c) RAID 5 with 12 drives?
d) RAID 1 with 7+7 drives and 2 hotspares?
We moved from a 14 FC drive (15k RPM) array (6+6 with 2 hotspares) to a 6 SSD
array (2+2 with 2 hotspares) because our iops wo
On Mon, 12 Aug 2013 11:01:09 -0500 (CDT)
Scott Whitney wrote:
> When you say "16 10K drives," do you mean:
I mean 8 RAID-1 pairs with data striped across the pairs. The Linux
software RAID "offset" scheme is described here:
http://www.ilsistemista.net/index.php/linux-a-unix/35-linux-software-r
On Mon, Aug 12, 2013 at 08:33:04AM -0700, Joshua D. Drake wrote:
> >1) Has anyone had experience with Intel 520 SSDs? Are they reliable?
> >When they fail, do they fail nicely (ie, failure detected and bad drive
> >removed from RAID array) or horribly (data silently corrupted...) ?
>
> I don't re
On Mon, Aug 12, 2013 at 1:05 PM, Bruce Momjian wrote:
> On Mon, Aug 12, 2013 at 08:33:04AM -0700, Joshua D. Drake wrote:
>> >1) Has anyone had experience with Intel 520 SSDs? Are they reliable?
>> >When they fail, do they fail nicely (ie, failure detected and bad drive
>> >removed from RAID array
On Mon, 12 Aug 2013 16:05:04 -0400
Bruce Momjian wrote:
> On Mon, Aug 12, 2013 at 08:33:04AM -0700, Joshua D. Drake wrote:
> > Put the pg_xlog on spindles, they are more than fast enough and
> > won't eat up the write life of your SSDs.
> Given its small size and need for fast fsync, I have WAL
Actually it is kinda complicated:
*dev* (9.1.9, for which pgadmin3 complains with that error)
postgres=# \dx+ adminpack
Objects in extension "adminpack"
Object Description
---
function pg_file_length(text)
function pg_file_read(text,bigint,bi
On Mon, Aug 12, 2013 at 04:41:48PM -0400, David F. Skoll wrote:
> On Mon, 12 Aug 2013 16:05:04 -0400
> Bruce Momjian wrote:
>
> > On Mon, Aug 12, 2013 at 08:33:04AM -0700, Joshua D. Drake wrote:
>
> > > Put the pg_xlog on spindles, they are more than fast enough and
> > > won't eat up the write
On Aug 12, 2013, at 4:15 PM, Lonni J Friedman wrote:
> On Mon, Aug 12, 2013 at 1:05 PM, Bruce Momjian wrote:
>> On Mon, Aug 12, 2013 at 08:33:04AM -0700, Joshua D. Drake wrote:
1) Has anyone had experience with Intel 520 SSDs? Are they reliable?
When they fail, do they fail nicely (ie
I tested the hybrid approach during my months-long testing and performance
stuff, and I was a bit underwhelmed.
That said, what _I_ personally really needed was an increase in peak iops. Using
spindles for "static" data (OS, some logs, and such) worked fine, but no matter
how I split up the pg st
Hi all,
I have a few questions related to recovering from a near-miss with
transactionid wraparound.
I'm currently running a vacuum freeze in single user mode on our largest
database (about 36 TB). It's been running for about 10 days (since the
database shut itself down to avoid xid wraparoun
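For reference, a generic way to watch how close each database is to that forced shutdown (a standard catalog query, not specific to this installation):

  -- age(datfrozenxid) approaches roughly 2 billion as wraparound nears; the
  -- server starts emitting warnings and eventually refuses commands before that.
  SELECT datname, age(datfrozenxid) AS xid_age
  FROM pg_database
  ORDER BY xid_age DESC;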
Natalie Wenz writes:
> ... With the speed postgres is capable of, and the ever-falling prices
> of storage making larger, faster databases possible, has the possibility
> of changing the transaction id to a 64-bit (or even 128-bit!) value been
> considered?
Not terribly seriously --- the penaltie
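One related detail of the current design: txid_current() already exposes a 64-bit, epoch-extended counter at the SQL level, even though the xid stored in tuple headers stays 32 bits:

  -- Returns a bigint computed as (epoch << 32) | xid, so the SQL-visible value
  -- keeps increasing across wraparounds of the 32-bit on-disk counter.
  SELECT txid_current();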