Hi Craig, I had done a fresh initdb; the default parameter configuration was used. I was setting a few parameters at startup with the command below:
./postgres -d postgres -c shared_buffers=$shared_bufs -N 200 -c min_wal_size=15GB -c max_wal_size=20GB -c checkpoint_timeout=900 -c maintenance_work_mem=1GB -c checkpoint_completion_target=0.9 &

Now I have modified the script a bit with Robert's suggestion, as below. Instead of starting the server with the postgres binary, I have set the parameters in the conf file and am starting it with pg_ctl. I am waiting for the results; once the core dump is generated I will share the details.

Test code snippet:

#Pre condition:
#Create and initialize the database, and export PGDATA
export PGDATA=/home/centos/PG_sources/postgresql/inst/bin/data
export PGPORT=5432
export LD_LIBRARY_PATH=/home/centos/PG_sources/postgresql/inst/lib:$LD_LIBRARY_PATH

#for i in "scale_factor shared_buffers time_for_readings no_of_readings orig_or_patch"
for i in "300 8GB 1800 3"
do
    scale_factor=`echo $i | cut -d" " -f1`
    shared_bufs=`echo $i | cut -d" " -f2`
    time_for_reading=`echo $i | cut -d" " -f3`
    no_of_readings=`echo $i | cut -d" " -f4`

    # -----------------------------------------------
    echo "Start of script for $scale_factor $shared_bufs " >> /home/centos/test_results.txt
    echo "============== $run_bin =============" >> /home/centos/test_results.txt

    for threads in 1 8 16 24 32 40 48 56 64 72 80 88 96 104 112 120 128
    #for threads in 8 16
    do
        #Start taking readings
        for ((readcnt = 1 ; readcnt <= $no_of_readings ; readcnt++))
        do
            echo "================================================" >> /home/centos/test_results.txt
            echo "$scale_factor, $shared_bufs, $threads, $threads, $time_for_reading Reading - ${readcnt}" >> /home/centos/test_results.txt

            #start server
            ./pg_ctl -D data -c -l logfile start
            #./postgres -d postgres -c shared_buffers=$shared_bufs -N 200 -c min_wal_size=15GB -c max_wal_size=20GB -c checkpoint_timeout=900 -c maintenance_work_mem=1GB -c checkpoint_completion_target=0.9 &
            sleep 5

            #drop and recreate the database
            ./dropdb test
            ./createdb test

            #initialize the database
            ./pgbench -i -s $scale_factor test
            sleep 5

            #run pgbench
            ./pgbench -c $threads -j $threads -T $time_for_reading -M prepared test >> /home/centos/test_results.txt
            sleep 10
            ./psql -d test -c "checkpoint" >> /home/centos/test_results.txt
            ./pg_ctl stop
        done
    done

    sleep 1
    mv /home/centos/test_results.txt /home/centos/test_results_list_${scale_factor}_${shared_bufs}_rw.txt
done

Regards,
Neha Sharma

On Thu, Jul 20, 2017 at 11:23 AM, Craig Ringer <cr...@2ndquadrant.com> wrote:

> On 19 July 2017 at 20:26, Neha Sharma <neha.sha...@enterprisedb.com> wrote:
>
>> Hi,
>>
>> I am getting a FailedAssertion while executing the attached script.
>> However, I am not able to produce the core dump for the same; the
>> script runs in the background and takes around a day to produce the
>> mentioned error.
>>
>> "TRAP: FailedAssertion("!(TransactionIdPrecedesOrEquals(oldestXact,
>> ShmemVariableCache->oldestXid))", File: "clog.c", Line: 683)
>> 2017-07-19 01:16:51.973 GMT [27873] LOG:  server process (PID 28084) was
>> terminated by signal 6: Aborted
>> 2017-07-19 01:16:51.973 GMT [27873] DETAIL:  Failed process was running:
>> autovacuum: VACUUM pg_toast.pg_toast_13029 (to prevent wraparound)"
>
> What are the starting conditions of your postgres instance? Does your
> script assume a newly initdb'd instance with no custom configuration? If
> not, what setup steps/configuration precede your script run?
>
>> well short of the 2-million mark.
>
> Er, billion.
>
> --
> Craig Ringer    http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Training & Services
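PS: for reference, this is roughly how the former command-line parameters were moved into the conf file before starting with pg_ctl (a sketch; `-N 200` becomes max_connections = 200, and the exact values mirror the old postgres command line):

```shell
# Sketch: append the former command-line settings to postgresql.conf.
# In the script PGDATA is already exported; the fallback below is only
# so this snippet also runs standalone.
: "${PGDATA:=$(mktemp -d)}"
touch "$PGDATA/postgresql.conf"
cat >> "$PGDATA/postgresql.conf" <<'EOF'
shared_buffers = 8GB
max_connections = 200
min_wal_size = 15GB
max_wal_size = 20GB
checkpoint_timeout = 900
maintenance_work_mem = 1GB
checkpoint_completion_target = 0.9
EOF
```

After this, `./pg_ctl -D data -c -l logfile start` picks the settings up without any -c options on the command line.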
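PPS: since the earlier runs left no core file, I am also raising the core file size limit in the shell that launches the server, in addition to the -c flag already passed to pg_ctl (a sketch; whether a core actually lands under $PGDATA also depends on the machine's kernel.core_pattern, which I have not touched here):

```shell
# Allow unlimited-size core files in the launching shell, so the
# SIGABRT from the assertion failure can leave a core file behind.
ulimit -c unlimited
# Confirm the limit actually took effect before starting the server.
ulimit -c
```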