Re: preparing for GCC 4.9

2014-05-30 Thread Adam Conrad
On Thu, May 08, 2014 at 05:25:02PM +0200, Matthias Klose wrote:
 
 I would like to see some partial test rebuilds (like buildd or minimal
 chroot packages) for other architectures. Any chance the porters could
 set up such a test rebuild for some architectures? AFAICS the results
 for the GCC testsuite look OK-ish for every architecture.

I'm confident that, other than one or two potential outliers, test build
results on powerpc and ppc64 should show the same number of regressions
as ppc64el, and quite confident that where that's not the case, we can
get it fixed in a hurry, so please do those arches in lockstep with the
rest.

... Adam

PS: Switching hats to arm64, that one should also rev with the rest,
but I think that's probably a no-brainer anyway, given that it's a new
port, where staying on the cutting edge is usually sane.





Re: Bug#724469: FTBFS on all big-endian architectures

2014-03-23 Thread Adam Conrad
On Sun, Mar 23, 2014 at 10:33:32AM +0100, intrigeri wrote:
 
 I'd rather not drop s390x from the list of architectures this package
 is built for, but this RC bug has now been around for 6 months, and at
 some point I'll want to get rid of it.

Not fixing a bug isn't the way to get rid of it.  This isn't
s390x-specific; it's incorrect code leading to failures on 64-bit BE
arches, of which there are several.  It just happens that s390x is the
only officially-supported one.

I understand that you personally may not have the skills to fix it,
and need input from a porter or from upstream, but the bug doesn't
magically cease to exist just because no one has handed you a patch.
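
For whoever picks this up: the usual shape of this bug class in FFI
bindings is a return value widened into a word-sized slot and then read
back through a pointer to a narrower type.  The sketch below is
hypothetical (it is not the code from this package), but it reproduces
the symptom on 64-bit BE:

--- 8< ---
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* An FFI layer (libffi-style) widens small integer returns into a
     * word-sized slot.  Pretend the callback returned (int)23: */
    uint64_t slot = 23;

    /* Buggy read: take the *start* of the slot as a 32-bit int.  On
     * little-endian machines the low half happens to sit first, so
     * this "works"; on 64-bit big-endian machines (s390x, ppc64,
     * sparc64) you read the high half instead. */
    int32_t buggy;
    memcpy(&buggy, &slot, sizeof buggy);

    /* Correct read: take the whole slot, then truncate. */
    int32_t fixed = (int32_t)slot;

    printf("buggy: %d, fixed: %d\n", buggy, fixed);
    /* x86_64: buggy: 23, fixed: 23 -- s390x: buggy: 0, fixed: 23 */
    return 0;
}
--- 8< ---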

... Adam





Re: Bug#724469: FTBFS on all big-endian architectures

2014-03-22 Thread Adam Conrad
On Sat, Mar 22, 2014 at 11:53:23AM +0100, intrigeri wrote:
 
 AFAICT the latest patch proposed by upstream on February 9 [1] has
 been tested on mips only. My understanding is that upstream has been
 waiting for more test results since then. Can anyone please test this
 on other big-endian architectures?
 
 It would be good if we could at least fix this for the 32-bit ones.
 
 [1] 
 https://rt.cpan.org/Ticket/Attachment/1324475/702426/0001-Fix-return-value-handling-on-big-endian-architecture.patch

Works fine for me on powerpc, but fails miserably on s390x:

t/00-basic-types.t  ok 
t/arg-checks.t  ok   
t/arrays.t  1/29 
#   Failed test at t/arrays.t line 14.
#  got: '0'
# expected: '6'
Out of memory!
# Looks like you planned 29 tests but ran 2.
# Looks like you failed 1 test of 2 run.
# Looks like your test exited with 1 just after 2.
t/arrays.t  Failed 28/29 subtests 
t/boxed.t . ok 
t/cairo-integration.t . ok   
t/callbacks.t . 1/25 
#   Failed test at t/callbacks.t line 14.
#  got: '6941192'
# expected: '23'

#   Failed test at t/callbacks.t line 16.
#  got: '894'
# expected: '23'

#   Failed test at t/callbacks.t line 17.
#  got: '894'
# expected: '23'

#   Failed test at t/callbacks.t line 18.
#  got: '-1071533088'
# expected: '46'

#   Failed test at t/callbacks.t line 22.
#  got: '0'
# expected: '23'

#   Failed test at t/callbacks.t line 26.
#  got: '-1071861040'
# expected: '23'
# Looks like you failed 6 tests of 25.
t/callbacks.t . Dubious, test returned 6 (wstat 1536, 0x600)
Failed 6/25 subtests 
t/closures.t .. ok   
t/constants.t . ok   
t/enums.t . Failed 3/4 subtests 
t/hashes.t  ok   
t/interface-implementation.t .. ok   
t/objects.t ... ok 
t/structs.t ... ok   
t/values.t  ok   
t/vfunc-chaining.t  ok 
t/vfunc-ref-counting.t  ok 

Test Summary Report
-------------------
t/arrays.t  (Wstat: 9 Tests: 2 Failed: 1)
  Failed test:  2
  Non-zero wait status: 9
  Parse errors: Bad plan.  You planned 29 tests but ran 2.
t/callbacks.t   (Wstat: 1536 Tests: 25 Failed: 6)
  Failed tests:  3, 6, 9, 14, 19, 25
  Non-zero exit status: 6
t/enums.t   (Wstat: 11 Tests: 1 Failed: 0)
  Non-zero wait status: 11
  Parse errors: Bad plan.  You planned 4 tests but ran 1.
Files=16, Tests=297, 222 wallclock secs ( 0.07 usr  0.03 sys + 11.27 cusr 39.60 csys = 50.97 CPU)
Result: FAIL
Failed 3/16 test programs. 7/297 subtests failed.





RE: Losing my mind, RAID1 on Sparc completely broken?

2004-03-09 Thread Adam Conrad
Antonio Prioglio wrote:
 
 You need to leave block 0 alone and start the partitions
 on block 1.
 
 It is a known issue on SPARCs.

It's my understanding that SILO needs to be installed to a partition
that begins on cylinder 0, or it won't boot.  Regardless, I fail to see
how this could be affecting a RAID on partitions later on the disks.
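
For context on why block 0 is special here: the Sun disk label (the
partition table itself) lives in the first 512-byte sector, with magic
0xDABE stored big-endian at byte offset 508, so anything writing to the
start of an overlapping partition (a filesystem or RAID superblock, say)
clobbers it.  A quick sketch to check for the label; the default device
path is just an example:

--- 8< ---
/* Detect a Sun disk label in sector 0 of a device. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define SUN_LABEL_MAGIC 0xDABE

int main(int argc, char **argv)
{
    unsigned char sector[512];
    const char *dev = argc > 1 ? argv[1] : "/dev/hda";  /* assumed */
    int fd = open(dev, O_RDONLY);

    if (fd < 0 || read(fd, sector, sizeof sector) != (ssize_t)sizeof sector) {
        perror(dev);
        return 1;
    }
    /* All Sun label fields are big-endian on disk. */
    uint16_t magic = (uint16_t)((sector[508] << 8) | sector[509]);
    printf("%s: sector 0 %s a Sun disk label (magic 0x%04X)\n",
           dev, magic == SUN_LABEL_MAGIC ? "holds" : "does not hold",
           magic);
    close(fd);
    return 0;
}
--- 8< ---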

... Adam



Re: Losing my mind, RAID1 on Sparc completely broken?

2004-03-08 Thread Adam Conrad
Following up to my previous post, here is the config for the RAID array
that I just can't keep alive:

--- 8< ---
cranx:~# cat /etc/raidtab
raiddev /dev/md1
raid-level  1
nr-raid-disks   2
nr-spare-disks  0
chunk-size  4
persistent-superblock   1
device  /dev/hda2
raid-disk   0
device  /dev/hdc2
raid-disk   1
cranx:~# fdisk -l /dev/hda

Disk /dev/hda (Sun disk label): 16 heads, 255 sectors, 19156 cylinders
Units = cylinders of 4080 * 512 bytes

   Device Flag    Start       End    Blocks   Id  System
/dev/hda1              0        20     40800   83  Linux native
/dev/hda2             21     18665  38033760   83  Linux native
/dev/hda3              0     19156  39078240    5  Whole disk
/dev/hda4          18666     19156    999600   83  Linux native
cranx:~# fdisk -l /dev/hdc

Disk /dev/hdc (Sun disk label): 16 heads, 255 sectors, 19156 cylinders
Units = cylinders of 4080 * 512 bytes

   Device Flag    Start       End    Blocks   Id  System
/dev/hdc1              0        20     40800   83  Linux native
/dev/hdc2             21     18665  38033760   83  Linux native
/dev/hdc3              0     19156  39078240    5  Whole disk
/dev/hdc4          18666     19156    999600   83  Linux native
cranx:~# cat /proc/mdstat
Personalities : [raid0] [raid1]
md1 : active raid1 hdc2[1] hda2[0]
  38033664 blocks [2/2] [UU]
--- 8< ---

If I set up a brand new filesystem on /dev/md1, mount it, write a bunch
of data to it (in this case, a copy of my root filesystem), umount it,
then fsck it, nine times out of ten I get illegal blocks, and once I got
something about a bad filename on /usr/include/linux/[somethingorother].
Basically, the FS is being corrupted in record time.

No errors are thrown in dmesg, no oopses, no nothing.  Just silent
corruption.
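
If anyone wants to reproduce this without fsck in the loop, a crude
pattern check along these lines should catch it (sketch only; the mount
point and sizes below are made up):

--- 8< ---
/* Write a known pattern to a file on the suspect filesystem, re-read
 * it, and compare.  Any mismatch means data changed between write and
 * read, i.e. silent corruption. */
#include <stdio.h>
#include <string.h>

#define CHUNK  4096
#define CHUNKS 25600L                 /* ~100 MB total */

int main(void)
{
    const char *path = "/mnt/md1/pattern.bin";  /* assumed mount point */
    unsigned char buf[CHUNK], check[CHUNK];
    long i;
    FILE *f = fopen(path, "wb");

    if (!f) { perror(path); return 1; }
    for (i = 0; i < CHUNKS; i++) {
        memset(buf, (unsigned char)(i & 0xFF), sizeof buf);
        if (fwrite(buf, 1, sizeof buf, f) != sizeof buf) {
            perror("write");
            return 1;
        }
    }
    fclose(f);                        /* ideally umount/remount here */

    if (!(f = fopen(path, "rb"))) { perror(path); return 1; }
    for (i = 0; i < CHUNKS; i++) {
        memset(buf, (unsigned char)(i & 0xFF), sizeof buf);
        if (fread(check, 1, sizeof check, f) != sizeof check ||
            memcmp(buf, check, sizeof check) != 0) {
            fprintf(stderr, "corruption in chunk %ld\n", i);
            return 1;
        }
    }
    fclose(f);
    puts("pattern intact");
    return 0;
}
--- 8< ---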

Any ideas?

... Adam Conrad

--
backup [n] (bak'up): The duplicate copy of crucial data that no one
 bothered to make; used only in the abstract.

1024D/C6CEA0C9  C8B2 CB3E 3225 49BB 5ED2  0002 BE3C ED47 C6CE A0C9