Re: [HACKERS] Patch: incorrect array offset in backend replication tar header

2012-09-27 Thread Brian Weaver
On Thu, Sep 27, 2012 at 6:43 PM, Tom Lane  wrote:
> Brian Weaver  writes:
>> OK, here is my attempt at patching and correcting the issue in this
>> thread. I have done my best to test to ensure that hot standby,
>> pg_basebackup, and pg_restore of older files work without issues. I
>> think this might be a larger patch than expected; I took some
>> liberties and tried to clean things up a bit.
>
>> For example the block size '512' was scattered throughout the code
>> regarding the tar block size. I've replaced instances of that with a
>> defined constant TAR_BLOCK_SIZE. I've likewise created other constants
>> and used them in place of raw numbers in what I hope makes the code a
>> bit more readable.
>
> That seems possibly reasonable ...
>
>> I've also used functions like strncpy(), strnlen(), and the like in
>> place of sprintf() where I could. Also instead of using sscanf() I
>> used a custom octal conversion routine which has a hard limit on how
>> many characters it will process, like strncpy() & strnlen().
>
> ... but I doubt that this really constitutes a readability improvement.
> Or a portability improvement.  strnlen for instance is not to be found
> in Single Unix Spec v2 (http://pubs.opengroup.org/onlinepubs/007908799/)
> which is what we usually take as our baseline assumption about which
> system functions are available everywhere.  By and large, I think the
> more different system functions you rely on, the harder it is to read
> your code, even if some unusual system function happens to exactly match
> your needs in particular places.  It also greatly increases the risk
> of having portability problems, eg on Windows, or non-mainstream Unix
> platforms.
>
> But a larger point is that the immediate need is to fix bugs.  Code
> beautification is a separate activity and would be better submitted as
> a separate patch.  There is no way I'd consider applying most of this
> patch to the back branches, for instance.
>
> regards, tom lane


Here's a very minimal fix then; perhaps it will be more palatable.
Even though I regret the effort I put into the first patch, it's in my
employer's best interest that the bug is fixed, so I'm obliged to try
to remediate the problem in a form more acceptable to the community.

enjoy
-- 

/* insert witty comment here */


postgresql-tar.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch: incorrect array offset in backend replication tar header

2012-09-27 Thread Brian Weaver
OK, here is my attempt at patching and correcting the issue in this
thread. I have done my best to test to ensure that hot standby,
pg_basebackup, and pg_restore of older files work without issues. I
think this might be a larger patch than expected; I took some
liberties and tried to clean things up a bit.

For example the block size '512' was scattered throughout the code
regarding the tar block size. I've replaced instances of that with a
defined constant TAR_BLOCK_SIZE. I've likewise created other constants
and used them in place of raw numbers in what I hope makes the code a
bit more readable.

I've also used functions like strncpy(), strnlen(), and the like in
place of sprintf() where I could. Also instead of using sscanf() I
used a custom octal conversion routine which has a hard limit on how
many characters it will process, like strncpy() & strnlen().
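A bounded octal parser of the kind described above can be quite small. This is only a hypothetical sketch of the idea (read_octal is an illustrative name, not the function from the patch): it converts at most 'len' bytes and stops at the first non-octal character, so a field lacking a terminating NUL cannot cause a read past the end.

```c
#include <stddef.h>

/* Hypothetical sketch of a length-limited octal converter, in the
 * spirit of strncpy()/strnlen(): never reads more than 'len' bytes,
 * and stops early at the first character outside '0'..'7'. */
static long read_octal(const char *p, size_t len)
{
    long val = 0;
    size_t i;

    for (i = 0; i < len && p[i] >= '0' && p[i] <= '7'; i++)
        val = (val << 3) | (p[i] - '0');
    return val;
}
```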

I expect comments, hopefully they'll be positive.

-- Brian
-- 

/* insert witty comment here */


postgresql-tar.patch
Description: Binary data



Re: [HACKERS] Patch: incorrect array offset in backend replication tar header

2012-09-27 Thread Brian Weaver
Magnus,

I probably just did a poor job of explaining what I wanted to try. I
was going to have the base backup open two connections; one to stream
the tar archive, the second to receive the wal files like
pg_receivexlog.

The wal files received on the second connection would be streamed to a
temporary file, with tar headers. Then, once the tar archive on the
first connection had been completely received, I would simply replay
the contents of the temporary file to append them to the tar archive.

Turns out that isn't necessary. It was an idea born of my
misunderstanding of how pg_basebackup works. The archive will
include all the wal files if I make wal_keep_segments high enough.

-- Brian

On Thu, Sep 27, 2012 at 6:01 PM, Magnus Hagander  wrote:
> On Tue, Sep 25, 2012 at 5:08 PM, Brian Weaver  wrote:
>> Unless I misread the code, the tar format and streaming xlog are
>> mutually exclusive. Considering my normal state of fatigue it's not
>> unlikely. I don't want to have to set my wal_keep_segments
>> artificially high just for the backup
>
> Correct, you can't use both of those at the same time. That can
> certainly be improved - but injecting a file into the tar from a
> different process is far from easy. But one idea might be to just
> stream the WAL into a *separate* tarfile in this case.
>
>
> --
>  Magnus Hagander
>  Me: http://www.hagander.net/
>  Work: http://www.redpill-linpro.com/



-- 

/* insert witty comment here */




[HACKERS] EVENT Keyword and CREATE TABLE

2012-09-26 Thread Brian Weaver
I think I just got bitten hard by a commit in mid July... git sha1 3855968.

In some of our old tables going back several years we have a column
named 'event', as in:

CREATE TABLE tblaudittrail (
id bigint NOT NULL,
siteid integer NOT NULL,
entrytype character varying(25),
form character varying(50),
recordid integer,
field character varying(25),
changedfrom character varying(500),
changedto character varying(500),
changedon timestamp with time zone,
changedby character varying(25),
event character varying(1000)
);

I was working off trunk and the database refuses to create this table
any longer. Is this by design or is it a regression bug?

Thanks

-- Brian
-- 

/* insert witty comment here */




Re: [HACKERS] Patch: incorrect array offset in backend replication tar header

2012-09-25 Thread Brian Weaver
Tom,

I'm fine with submitting highly focused patches first. I was just
explaining my end-goal. Still I will need time to patch, compile, and
test before submitting so you're not going to see any output from me
for a few days. That's all assuming my employer can leave me alone
long enough to focus on a single task. I'm far too interrupt driven at
work.

-- Brian

On Tue, Sep 25, 2012 at 10:30 AM, Tom Lane  wrote:
> Brian Weaver  writes:
>> If you're willing to wait a bit on me to code and test my extensions
>> to pg_basebackup I will try to address some of the deficiencies as
>> well as add new features.
>
> I think it's a mistake to try to handle these issues in the same patch
> as feature extensions.  If you want to submit a patch for them, I'm
> happy to let you do the legwork, but please keep it narrowly focused
> on fixing file-format deficiencies.
>
> The notes I had last night after examining pg_dump were:
>
> magic number written incorrectly, but POSIX fields aren't filled anyway
> (which is why tar tvf doesn't show them)
>
> checksum code is brain-dead; no use in "lastSum" nor in looping
>
> per spec, there should be 1024 zeroes not 512 at end of file;
> this explains why tar whines about a "lone zero block" ...
>
> Not sure which of these apply to pg_basebackup.
>
> As far as the backwards compatibility issue goes, what seems like
> a good idea after sleeping on it is (1) fix pg_dump in HEAD to emit
> standard-compliant tar files; (2) fix pg_restore in HEAD and all back
> branches to accept both the standard and the incorrect magic field.
> This way, the only people with a compatibility problem would be those
> trying to use by-then-ancient pg_restore versions to read 9.3 or later
> pg_dump output.
>
> regards, tom lane
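For reference, the checksum rule the notes above call out is a single unsigned sum over the 512-byte header, with the 8-byte checksum field itself counted as ASCII spaces. This is a minimal sketch of that rule per the ustar spec, not PostgreSQL's actual code; as Tom notes, no second "lastSum" pass is needed.

```c
#include <stddef.h>

/* Sketch of the POSIX ustar header checksum: one pass summing every
 * byte of the 512-byte block as unsigned, treating the checksum field
 * (offsets 148..155) as if it held eight spaces. */
static unsigned int tar_checksum(const unsigned char *header)
{
    unsigned int sum = 0;
    size_t i;

    for (i = 0; i < 512; i++)
    {
        if (i >= 148 && i < 156)
            sum += ' ';     /* checksum field counts as spaces */
        else
            sum += header[i];
    }
    return sum;
}
```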



-- 

/* insert witty comment here */




Re: [HACKERS] Patch: incorrect array offset in backend replication tar header

2012-09-25 Thread Brian Weaver
Unless I misread the code, the tar format and streaming xlog are
mutually exclusive. Considering my normal state of fatigue it's not
unlikely. I don't want to have to set my wal_keep_segments
artificially high just for the backup

On Tue, Sep 25, 2012 at 10:05 AM, Marko Tiikkaja  wrote:
> On 9/25/12 3:38 PM, Brian Weaver wrote:
>>
>> I want
>> to modify pg_basebackup to include the WAL files in the tar output.
>
>
> Doesn't pg_basebackup -x do exactly that?
>
>
> Regards,
> Marko Tiikkaja
>



-- 

/* insert witty comment here */




Re: [HACKERS] Patch: incorrect array offset in backend replication tar header

2012-09-25 Thread Brian Weaver
Tom,

I actually plan on doing a lot of work on the frontend pg_basebackup
for my employer. pg_basebackup is 90% of the way to a solution that I
need for doing backups of *large* databases while allowing the
database to continue to work. The problem is a lack of secondary disk
space to save a replication of the original database cluster. I want
to modify pg_basebackup to include the WAL files in the tar output. I
have several ideas but I need to code and test them. That was the main
reason I was examining the backend code.

If you're willing to wait a bit on me to code and test my extensions
to pg_basebackup, I will try to address some of the deficiencies as
well as add new features.

I agree the checksum algorithm could definitely use some refactoring.
I was already working on that before I retired last night.

-- Brian

On Mon, Sep 24, 2012 at 10:36 PM, Tom Lane  wrote:
> Brian Weaver  writes:
>> Here are lines 321 through 329 of 'archive_read_support_format_tar.c'
>> from libarchive
>
>>  321 /* Recognize POSIX formats. */
>>  322 if ((memcmp(header->magic, "ustar\0", 6) == 0)
>>  323 && (memcmp(header->version, "00", 2) == 0))
>>  324 bid += 56;
>>  325
>>  326 /* Recognize GNU tar format. */
>>  327 if ((memcmp(header->magic, "ustar ", 6) == 0)
>>  328 && (memcmp(header->version, " \0", 2) == 0))
>>  329 bid += 56;
>
>> I'm wondering if the original committer put the 'ustar00\0' string in by 
>> design?
>
> The second part of that looks to me like it matches "ustar  \0",
> not "ustar00\0".  I think the pg_dump coding is just wrong.  I've
> already noticed that its code for writing the checksum is pretty
> brain-dead too :-(
>
> Note that according to the wikipedia page, tar programs typically
> accept files as pre-POSIX format if the checksum is okay, regardless of
> what is in the magic field; and the fields that were added by POSIX
> are noncritical so we'd likely never notice that they were being
> ignored.  (In fact, looking closer, pg_dump isn't even filling those
> fields anyway, so the fact that it's not producing a compliant magic
> field may be a good thing ...)
>
> regards, tom lane



-- 

/* insert witty comment here */




Re: [HACKERS] Patch: incorrect array offset in backend replication tar header

2012-09-24 Thread Brian Weaver
Tom,

I'm still investigating and I have been looking at various sources. I
have checked lots of pages on the web and I was just looking at the
libarchive source from github. I found an interesting sequence in
libarchive that implies that the 'ustar00\0' marks the header as GNU
Tar format.

Here are lines 321 through 329 of 'archive_read_support_format_tar.c'
from libarchive

 321 /* Recognize POSIX formats. */
 322 if ((memcmp(header->magic, "ustar\0", 6) == 0)
 323 && (memcmp(header->version, "00", 2) == 0))
 324 bid += 56;
 325
 326 /* Recognize GNU tar format. */
 327 if ((memcmp(header->magic, "ustar ", 6) == 0)
 328 && (memcmp(header->version, " \0", 2) == 0))
 329 bid += 56;

I'm wondering if the original committer put the 'ustar00\0' string in by design?

Regardless I'll look at it more tomorrow 'cause I'm calling it a
night. I need to send a note to the libarchive folks too because I
*think* I found a potential buffer overrun in one of their octal
conversion routines.
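To make the two encodings in the libarchive excerpt concrete, here's a small sketch (tar_flavor_of is an illustrative name, not libarchive's API) distinguishing the POSIX magic/version pair from the GNU one. Read together, the POSIX bytes are "ustar\0" + "00" while the GNU bytes are "ustar  \0" — which is why "ustar00\0" matches neither.

```c
#include <string.h>

/* Sketch mirroring the quoted libarchive test: POSIX ustar stores
 * magic "ustar\0" and version "00"; GNU tar stores "ustar " and
 * " \0" at the same offsets (257 and 263). */
enum tar_flavor { TAR_UNKNOWN, TAR_POSIX, TAR_GNU };

static enum tar_flavor tar_flavor_of(const char *h)
{
    const char *magic = &h[257];
    const char *version = &h[263];

    if (memcmp(magic, "ustar\0", 6) == 0 && memcmp(version, "00", 2) == 0)
        return TAR_POSIX;
    if (memcmp(magic, "ustar ", 6) == 0 && memcmp(version, " \0", 2) == 0)
        return TAR_GNU;
    return TAR_UNKNOWN;
}
```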

-- Brian

On Mon, Sep 24, 2012 at 10:07 PM, Tom Lane  wrote:
> Brian Weaver  writes:
>> While researching the way streaming replication works I was examining
>> the construction of the tar file header. By comparing documentation on
>> the tar header format from various sources, I'm certain the following
>> patch should be applied so the group identifier is put into the
>> header properly.
>
> Yeah, this is definitely wrong.
>
>> While I realize that wikipedia isn't always the best source of
>> information, the header offsets seem to match the other documentation
>> I've found. The format is just easier to read on wikipedia
>
> The authoritative specification can be found in the "pax" page in the
> POSIX spec, which is available here:
> http://pubs.opengroup.org/onlinepubs/9699919799/
>
> I agree that the 117 number is bogus, and also that the magic "ustar"
> string is written incorrectly.  What's more, it appears that the latter
> error has been copied from pg_dump (but the 117 seems to be just a new
> bug in pg_basebackup).  I wonder what else might be wrong hereabouts :-(
> Will sit down and take a closer look.
>
> I believe what we need to do about this is:
>
> 1. fix pg_dump and pg_basebackup output to conform to spec.
>
> 2. make sure pg_restore will accept both conformant and
>previous-generation files.
>
> Am I right in believing that we don't have any code that's expected to
> read pg_basebackup output?  We just feed it to "tar", no?
>
> I'm a bit concerned about backwards compatibility issues.  It looks to
> me like existing versions of pg_restore will flat out reject files that
> have a spec-compliant "ustar\0" MAGIC field.  Is it going to be
> sufficient if we fix this in minor-version updates, or are we going to
> need to have a switch that tells pg_dump to emit the incorrect old
> format?  (Ick.)
>
> regards, tom lane



-- 

/* insert witty comment here */




Re: [HACKERS] Patch: incorrect array offset in backend replication tar header

2012-09-24 Thread Brian Weaver
Um... I apologize for the third e-mail on the topic. It seems that my
C coding is a bit rusty from years of neglect. No sooner had I hit the
send button than I realized that trying to embed a null character in a
string might not work, especially when it's followed by two
consecutive zeros.

Here is a safer fix which is more in line with the rest of the code in
the file. I guess this is what I get for being clever.

Patch is attached

-- 

/* insert witty comment here */


pg-tar.patch
Description: Binary data



Re: [HACKERS] Patch: incorrect array offset in backend replication tar header

2012-09-24 Thread Brian Weaver
Actually I found one other issue while continuing my investigation.
The insertion of the 'ustar' and version '00' has the '00' version at
the wrong offset. The patch is attached.

-- Brian

On Mon, Sep 24, 2012 at 7:51 PM, Brian Weaver  wrote:
> While researching the way streaming replication works I was examining
> the construction of the tar file header. By comparing documentation on
> the tar header format from various sources, I'm certain the following
> patch should be applied so the group identifier is put into the
> header properly.
>
> While I realize that wikipedia isn't always the best source of
> information, the header offsets seem to match the other documentation
> I've found. The format is just easier to read on wikipedia
>
> http://en.wikipedia.org/wiki/Tar_(file_format)#File_header
>
> Here is the trivial patch:
>
> diff --git a/src/backend/replication/basebackup.c
> b/src/backend/replication/basebackup.c
> index 4aaa9e3..524223e 100644
> --- a/src/backend/replication/basebackup.c
> +++ b/src/backend/replication/basebackup.c
> @@ -871,7 +871,7 @@ _tarWriteHeader(const char *filename, const char
> *linktarget,
> sprintf(&h[108], "%07o ", statbuf->st_uid);
>
> /* Group 8 */
> -   sprintf(&h[117], "%07o ", statbuf->st_gid);
> +   sprintf(&h[116], "%07o ", statbuf->st_gid);
>
> /* File size 12 - 11 digits, 1 space, no NUL */
> if (linktarget != NULL || S_ISDIR(statbuf->st_mode))
>
>
> -- Brian
>
> --
>
> /* insert witty comment here */



-- 

/* insert witty comment here */


pg-tar.patch
Description: Binary data



[HACKERS] Patch: incorrect array offset in backend replication tar header

2012-09-24 Thread Brian Weaver
While researching the way streaming replication works I was examining
the construction of the tar file header. By comparing documentation on
the tar header format from various sources, I'm certain the following
patch should be applied so the group identifier is put into the
header properly.

While I realize that wikipedia isn't always the best source of
information, the header offsets seem to match the other documentation
I've found. The format is just easier to read on Wikipedia.

http://en.wikipedia.org/wiki/Tar_(file_format)#File_header

Here is the trivial patch:

diff --git a/src/backend/replication/basebackup.c
b/src/backend/replication/basebackup.c
index 4aaa9e3..524223e 100644
--- a/src/backend/replication/basebackup.c
+++ b/src/backend/replication/basebackup.c
@@ -871,7 +871,7 @@ _tarWriteHeader(const char *filename, const char
*linktarget,
sprintf(&h[108], "%07o ", statbuf->st_uid);

/* Group 8 */
-   sprintf(&h[117], "%07o ", statbuf->st_gid);
+   sprintf(&h[116], "%07o ", statbuf->st_gid);

/* File size 12 - 11 digits, 1 space, no NUL */
if (linktarget != NULL || S_ISDIR(statbuf->st_mode))
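For context on the offsets the patch indexes by hand, this is a sketch of the ustar header layout with field names and sizes per the POSIX spec (not PostgreSQL's own declaration). The gid field begins at byte 116, so the original sprintf(&h[117], ...) started writing the group id one byte into the field, shifted right.

```c
#include <stddef.h>

/* Sketch of the POSIX ustar header layout; every field is an ASCII
 * text field, so the struct has no alignment padding.  The header
 * block itself is NUL-padded out to 512 bytes on disk. */
struct ustar_header
{
    char name[100];     /* offset   0 */
    char mode[8];       /* offset 100 */
    char uid[8];        /* offset 108 */
    char gid[8];        /* offset 116 -- not 117 */
    char size[12];      /* offset 124 */
    char mtime[12];     /* offset 136 */
    char chksum[8];     /* offset 148 */
    char typeflag;      /* offset 156 */
    char linkname[100]; /* offset 157 */
    char magic[6];      /* offset 257 */
    char version[2];    /* offset 263 */
    char uname[32];     /* offset 265 */
    char gname[32];     /* offset 297 */
    char devmajor[8];   /* offset 329 */
    char devminor[8];   /* offset 337 */
    char prefix[155];   /* offset 345 */
};
```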


-- Brian

-- 

/* insert witty comment here */




Re: [HACKERS] Problem with multi-job pg_restore

2012-05-01 Thread Brian Weaver
Tom,

The restore appears to have finished without a problem. The issue I
have is a running instance of postgres is still active in a COPY state
after the restore. The process is running full tilt, almost like it's
in a tight loop condition.

-- Brian

On Tue, May 1, 2012 at 1:44 PM, Tom Lane  wrote:
> Brian Weaver  writes:
>> Doh! I missed a script that was run by cron that does a nightly
>> backup. That's the likely offender for the 'copy-to-stdout'
>
>> I've removed it from the nightly run. I'll see if I have any better luck
>> with this run. Still not sure about the best way to debug the issue
>> though. Any pointers would be appreciated.
>
> Well, given that this takes so long, adding PID and time of day to your
> log_line_prefix would probably be a smart idea.  But at this point it
> sounds like what failed was the cron job.  Are you sure that the
> pg_restore didn't finish just fine?  If it didn't, what's the evidence?
>
>                        regards, tom lane



-- 

/* insert witty comment here */



Re: [HACKERS] Problem with multi-job pg_restore

2012-05-01 Thread Brian Weaver
Doh! I missed a script that was run by cron that does a nightly
backup. That's the likely offender for the 'copy-to-stdout'

I've removed it from the nightly run. I'll see if I have any better luck
with this run. Still not sure about the best way to debug the issue
though. Any pointers would be appreciated.

On Tue, May 1, 2012 at 12:59 PM, Tom Lane  wrote:
> Brian Weaver  writes:
>
> I'm confused.  A copy-to-stdout ought to be something that pg_dump
> would do, not pg_restore.  Are you sure this is related at all?
>
>                        regards, tom lane



-- 

/* insert witty comment here */



Re: [HACKERS] Problem with multi-job pg_restore

2012-05-01 Thread Brian Weaver
Here's the steps I'm taking

# pg_restore -l database.dump > database.list

Edit the database.list file and comment out the 'PROCEDURAL LANGUAGE -
plpgsql' item. Because the dump comes from an 8.4 system, where
PL/pgSQL was not installed by default, the CREATE LANGUAGE emitted
during pg_restore fails on 9.1, where PL/pgSQL is installed by
default.

# pg_restore --jobs=24 --create --dbname=template1
--use-list=database.list --verbose database.dump

After letting the restore run overnight I find the error message in
the log file, and one 'postmaster' process with the arguments
'postgresql dbname [local] COPY' still active. The pg_restore command
exited with a zero status as if all went well. I checked, and no other
clients are connected to the system, so my only thought is that this
is an orphaned instance from the pg_restore.

I'm hoping it's all related, because I stopped all the running
processes that would normally access the database. I'm in the phase of
testing the restore process because we're new to dealing with
multi-terabyte data and we're trying to figure out our options for
supporting such large data sets going forward.

I guess I could add the PID of the process generating the error to the
log; if it's the PID of the leftover postgresql process, that would be
a bit more definitive. It will probably be tomorrow afternoon before I
have any answer that includes the PID, given the runtime issues.

-- Brian

On Tue, May 1, 2012 at 12:59 PM, Tom Lane  wrote:
> Brian Weaver  writes:
>> I think I've discovered an issue with multi-job pg_restore on a 700 GB
>> data file created with pg_dump.
>
> Just to clarify, you mean parallel restore, right?  Are you using any
> options beyond -j, that is any sort of selective restore?
>
>> The problem occurs during the restore when one of the bulk loads
>> (COPY) seems to get disconnected from the restore process. I captured
>> stdout and stderr from the pg_restore execution and there isn't a
>> single hint of a problem. When I look at the log file in the
>> $PGDATA/pg_log directory I found the following errors:
>
>> LOG:  could not send data to client: Connection reset by peer
>> STATEMENT:  COPY public.outlet_readings_rollup (id, outlet_id,
>> rollup_interval, reading_time, min_current, max_current,
>> average_current, min_active_power, max_active_power,
>> average_active_power, min_apparent_power, max_apparent_power,
>> average_apparent_power, watt_hour, pdu_id, min_voltage, max_voltage,
>> average_voltage) TO stdout;
>
> I'm confused.  A copy-to-stdout ought to be something that pg_dump
> would do, not pg_restore.  Are you sure this is related at all?
>
>                        regards, tom lane



-- 

/* insert witty comment here */



[HACKERS] Problem with multi-job pg_restore

2012-05-01 Thread Brian Weaver
I think I've discovered an issue with multi-job pg_restore on a 700 GB
data file created with pg_dump. Before anyone points out that the
preferred procedure is to use the newest pg_dump to backup a database
before doing pg_restore let just say, "Yes I'm aware of that advice
and unfortunately it just isn't an option."

Here is the dump file information (acquired via pg_restore -l)

; Archive created at Wed Mar  7 10:51:40 2012
; dbname: raritan
; TOC Entries: 756
; Compression: -1
; Dump Version: 1.11-0
; Format: CUSTOM
; Integer: 4 bytes
; Offset: 8 bytes
; Dumped from database version: 8.4.9
; Dumped by pg_dump version: 8.4.9

The problem occurs during the restore when one of the bulk loads
(COPY) seems to get disconnected from the restore process. I captured
stdout and stderr from the pg_restore execution and there isn't a
single hint of a problem. When I look at the log file in the
$PGDATA/pg_log directory I found the following errors:

LOG:  could not send data to client: Connection reset by peer
STATEMENT:  COPY public.outlet_readings_rollup (id, outlet_id,
rollup_interval, reading_time, min_current, max_current,
average_current, min_active_power, max_active_power,
average_active_power, min_apparent_power, max_apparent_power,
average_apparent_power, watt_hour, pdu_id, min_voltage, max_voltage,
average_voltage) TO stdout;

I'm running PostgreSQL 9.1.3 on a CentOS 6 x86-64 build. I'm a
developer by trade so I'm good with building from the latest source
and using debugging tools as necessary. What I'm really looking for is
advice on how to maximize the information I get so that I can minimize
the number of times I have to run the restore. The restore process
takes at least a day to complete (discounting the disconnected COPY
process) and I don't have weeks to figure out what's going on.

Thanks

-- Brian
-- 

/* insert witty comment here */
