Hi,

I did check all of that before raising the question: no ZFS, Solaris 10 64-bit. I 
personally created 2 GB files by hand with no limitations, the log location is 
correct, and the individual files' timestamps are current. It doesn't happen all 
the time, only 1-3 times a day. I inspected my config file and didn't see any 
destination other than /data/postgres. Do I need to perform any specific setting 
to restrict everything to /data/postgres?
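
For reference, this is roughly how I double-checked the runtime settings (a 
quick sketch; it assumes psql access as the postgres superuser, and the three 
setting names are the ones I believe are relevant here):

$ psql -c "SHOW data_directory;"
$ psql -c "SELECT name, setting FROM pg_settings WHERE name IN ('data_directory', 'log_directory', 'logging_collector');"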

 

Sincerely yours,

 

 

Yuri Levinsky, DBA

Celltick Technologies Ltd., 32 Maskit St., Herzliya 46733, Israel

Mobile: +972 54 6107703, Office: +972 9 9710239; Fax: +972 9 9710222

 

From: Lou Picciano [mailto:loupicci...@comcast.net] 
Sent: Tuesday, June 25, 2013 5:34 PM
To: Yuri Levinsky
Cc: pgsql-bugs@postgresql.org
Subject: Re: [BUGS] Postgres crash? could not write to log file: No space left 
on device

 

Yuri, are you sure the pg xlogs are going where you expect them to? Have you 
fine-tooth-combed your conf file for log-file-related settings? Log files may 
well be directed to a filesystem other than /data/postgres (as is common in our 
environments, for example).
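
For example, something like this will surface the usual suspects (a rough 
sketch; adjust the path to wherever your postgresql.conf actually lives):

$ grep -E '^(data_directory|log_directory|logging_collector|log_filename)' /data/postgres/postgresql.conf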

Do a $ df -h on the various FSes involved...

Are you using Solaris 10 ACLs? Dig deeper into Tom's point about user-specific 
quotas. Is ZFS in use? Various quota settings under Solaris can give you really 
unexpected mileage.
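
A couple of commands that may help (sketches only; substitute your own dataset 
and user names):

$ zfs list -o name,used,avail,quota    # per-dataset quotas, if ZFS is in use
$ quota -v postgres                    # UFS user quotas for the postgres user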

Lou Picciano

----- Original Message -----
From: Yuri Levinsky 
To: pgsql-bugs@postgresql.org
Sent: Tue, 25 Jun 2013 12:23:00 -0000 (UTC)
Subject: [BUGS] Postgres crash? could not write to log file: No space left on 
device




Dear All,

I have the following issue on Sun Solaris 10, PostgreSQL version 9.2.3. The WAL 
level is minimal and there is no archiving. The DB has restarted several times; 
the box has been up for the last 23 days. The PostgreSQL installation and files 
are under /data/postgres, which is half empty. Is there some other destination 
that might cause the problem? Can I log the space consumption and the directory 
name where the problem is happening via some debug level or trace setting?
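
For example, would something along these lines help (my guesses at the 
relevant knobs; the paths assume a default 9.2 layout)?

log_temp_files = 0                 # in postgresql.conf: log every temp file with its size and path

$ du -sh /data/postgres/pg_xlog    # track WAL directory growth over time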

 

PANIC:  could not write to log file 81, segment 125 at offset 13959168, length 
1392640: No space left on device

LOG:  process 10203 still waiting for ShareLock on transaction 3010915 after 
1004.113 ms

STATEMENT:  UPDATE tctuserinfo SET clickmiles = clickmiles + $1, 
periodicalclickmiles = periodicalclickmiles + $2, active = $3, activeupdatetime 
= $4, activationsetby = $5, smscid = $6, sdrtime = $7, simvalue = simvalue + 
$8, totalsimvalue = totalsimvalue + $9, firstclick = $10, lastclick = $11, 
firstactivationtime = $12, cbchannel = $13, clickmilesupdatetime = $14, ci = 
$15, lac = $16, bscid = $17, lastlocationupdatetime = $18, subscriptiontype = 
$19, contentcategory = $20, livechannels = $21, contextclicks = $22 WHERE 
phonenumber = $23

LOG:  WAL writer process (PID 10476) was terminated by signal 6

LOG:  terminating any other active server processes

FATAL:  the database system is in recovery mode

 

But it looks OK:

 

dbnetapp:/vol/postgres

                        90G    44G    46G    49%    /data/postgres

 

Is it possible that “heavy” queries consumed disk space (as temporary space), 
and that after the crash and recovery everything became OK again?
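
For example, would checking the per-database temporary-file counters confirm 
this? Something like the following (my assumption being that the 9.2 
statistics views cover this):

$ psql -c "SELECT datname, temp_files, temp_bytes FROM pg_stat_database ORDER BY temp_bytes DESC;"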

 



