Re: [gentoo-user] OpenSSH security

2006-11-07 Thread Jesper Fruergaard Andersen
On Wednesday 08 November 2006 05:52, Mick wrote:
> I use this as it is trivial to edit the sshd port No on
> /etc/ssh/sshd_config and /etc/ssh/ssh_config on the client.  However,
> you need to change the ssh client port back to 22 (or specify it on the
> command line) next time you connect to a production server.

I use different ports for sshd on all my servers. You can just add them to 
~/.ssh/config once; it works like /etc/ssh/ssh_config. You can add per-host 
settings by doing something like this:

Host hostname
    Port portnumber
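
For example, to always reach a box that runs sshd on a non-standard port
(the hostname and port below are made up):

Host www.example.org
    Port 2222

With that in place, a plain "ssh www.example.org" picks up the right port
automatically.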


read:
$ man 5 ssh_config
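
On the server side it is just the Port directive in /etc/ssh/sshd_config
(again, 2222 is only an example), followed by a restart:

Port 2222

# /etc/init.d/sshd restart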

-- 
Jesper
 11:47:46 up 1 day,  3:59,  5 users,  load average: 0.51, 0.66, 0.60
-- 
gentoo-user@gentoo.org mailing list



Re: [gentoo-user] Failed raid

2006-09-25 Thread Jesper Fruergaard Andersen
On Sunday 24 September 2006 03:56, Richard Fish wrote:
> Take a look at one of the good superblocks with mdadm --examine.  That
> should give you an idea of what options to give to create.  You'll
> want to make sure you use the same layout, chunksize, etc...

Luckily I wrote down the exact command I used to create the array, so that 
part should be easy.
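
For reference, checking what a surviving member's superblock says is just
(device name from my setup):

# mdadm --examine /dev/hda8

That prints the RAID level, chunk size and layout recorded in the
superblock, which is what the new --create has to match.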

> You should then be able to recreate the array with mdadm --create
> --assume-clean, and get access to your filesystem and data.

I didn't pay attention to that option before, but now I have tried it. The data 
on that array is not that important; most of it has been backed up elsewhere.
It seemed to work. After recreating the array I ran fsck. The filesystem 
is ext3. It recovered the journal and cleared 5 orphaned inodes, and the 
filesystem seemed fine. I could mount it and access the data. The array 
synchronized again.
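
Roughly the sequence, in case it helps someone else. The chunk size below is
just the mdadm default, and the device order is how I listed them in my
earlier mail; both have to match whatever your original create command used:

# mdadm --create /dev/md6 --level=5 --raid-devices=5 --chunk=64 \
    --assume-clean /dev/hda8 /dev/hdc6 /dev/hdg7 /dev/hdh6 /dev/hde2
# fsck.ext3 /dev/md6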
For safety, though, I might copy the data elsewhere and recreate the 
filesystem from scratch.

-- 
Jesper
 01:17:47 up 17:42, 18 users,  load average: 0.14, 0.19, 0.25
-- 
gentoo-user@gentoo.org mailing list



[gentoo-user] Failed raid

2006-09-23 Thread Jesper Fruergaard Andersen
I have a home server using the 2.6.15-vs2.1.0-gentoo-r1 kernel. It has several 
raid arrays. The other day it froze, but I am away, so I had someone else 
turn it off and on again. After that 2 of the arrays didn't come up. One 
of them complained about an invalid superblock on one partition, but I added 
that partition back, it synchronized, and it works fine again.
However, another array, md6, has 2 partitions complaining about invalid 
superblocks, as shown below. It should use the partitions
/dev/hda8, /dev/hdc6, /dev/hdg7, /dev/hdh6 and /dev/hde2.
Two missing members is one too many for raid 5 to just synchronize again. I have 
looked around but cannot find any information about whether or how this can be 
fixed. Is it possible to repair the superblocks or force the array to start in 
some way? I guess the two members may be out of sync, so the filesystem could 
be invalid if the array is forced to start anyway.
But maybe it would be possible to force it long enough to mount the 
filesystem read-only for a while, retrieve the things that are not backed up 
elsewhere, and then recreate the array from scratch?
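
Something along these lines is what I had in mind (untested, and the mount
point is just an example):

# mdadm --assemble --force /dev/md6 /dev/hda8 /dev/hdc6 /dev/hdg7 /dev/hdh6 /dev/hde2
# mount -o ro /dev/md6 /mnt/recovery

I don't know if --assemble --force will accept two members with bad
superblocks though, so this is just a guess.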


md: Autodetecting RAID arrays.
md: invalid raid superblock magic on hdg7
md: hdg7 has invalid sb, not importing!
md: invalid raid superblock magic on hdh6
md: hdh6 has invalid sb, not importing!
md: autorun ...
md: considering hdh7 ...
...
md: considering hde2 ...
md:  adding hde2 ...
md:  adding hdc6 ...
md: hdc2 has different UUID to hde2
md: hdc1 has different UUID to hde2
md:  adding hda8 ...
md: hda2 has different UUID to hde2
md: hda1 has different UUID to hde2
md: created md6
md: bind<hda8>
md: bind<hdc6>
md: bind<hde2>
md: running: <hde2><hdc6><hda8>
md: md6: raid array is not clean -- starting background reconstruction
md: personality 4 is not loaded!
md: do_md_run() returned -22
md: md6 stopped.
md: unbind<hde2>
md: export_rdev(hde2)
md: unbind<hdc6>
md: export_rdev(hdc6)
md: unbind<hda8>
md: export_rdev(hda8)
...


-- 
Jesper
 17:09:48 up 2 days, 59 min, 25 users,  load average: 0.38, 0.88, 0.93
-- 
gentoo-user@gentoo.org mailing list