Re: Network Analyzer

2004-07-27 Thread Vijaya S
Martin,
You could try iptraf or ipfm.
Martin Zobel-Helas wrote:

> Hi @ all,
>
> i am looking for network analyzing software which can
>
>   * log connections (Host->Host, Src-Port->Dest-Port), do some statistics on
> that
>
> as well as
>
>   * traffic analysis like tcpdump or ethereal.
>
> I already had a look at ntop and nast, but neither matches the needed
> criteria.
>
> Any ideas?
>
> Greetings
>   Martin
> --
>   Martin Zobel-Helas [EMAIL PROTECTED]   or   [EMAIL PROTECTED]
>   http://www.helas.net  or  http://mhelas.blogspot.com
>   GPGKey-Fingerprint: 14744CACEF5CECFAE29E2CB17929AB90F7AC3AF0
>
>   


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: q re transferring mysql db from redhat to debian

2004-07-27 Thread Craig Sanders
On Wed, Jul 28, 2004 at 11:39:40AM +1000, Kevin Littlejohn wrote:
> >that's bizarre... and could easily lead to a hopelessly corrupted database
> >when other tables refer to that id field.
> >
> >how are you supposed to restore a mysql db from backup then?
> 
> Two answers:
> 
> 1) Why are you relying on the auto_increment field to increment from highest
> point each time?  So long as it gives you a unique value (and it should
> always do that), it shouldn't matter if it's re-using an old value (if it
> does, you shouldn't have deleted the old value...). 

i'm not.  i was just curious.

btw, sometimes it does matter if record ids are re-used.  e.g. one reason not
to re-use id numbers is if it's a search field on a web database.  if someone
bookmarks a particular search (e.g. for id=99) then returning to that bookmark
should either return the same record or it should return "no such record" if it
has been deleted.  it should never return a completely different record.

actually, this is true for any kind of app, not only for web databases.  e.g.
if your sales staff are used to entering product ids from memory, or if your
customers quote their customer ID, this can lead to serious confusion or
problems.  at best, some time will be wasted sorting out the mess.  at worst,
the wrong product may be shipped, or the wrong customer may be billed... or the
wrong medical records may be referred to when consulting with a patient.

in short, unique IDs need to be unique forever(*), not just unique for the
present moment.

(*) or at least a reasonable facsimile of "forever" :)

> Certainly, if you're referring to those IDs elsewhere, and you've
> deleted the record it was referring to, good database design would be to 
> not leave the references lying around, imnsho.

true enough.

more to the point, good database design wouldn't LET you leave them lying
around.  note: i mean the database engine here, not application design or
schema design.  referential integrity is not something that can or should be
left up to the application to enforce; it has to be enforced by the database
engine itself.



> 2) You can set the point to increment from, in a fairly hackish way, by 
> doing a "alter table tbl_name auto_increment = x" where x is the highest 
> number in use.  Requires scripting around your backup/restore process, 
> unfortunately.

no big deal.  some scripting is almost inevitable in database backup and restore.
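the scripting Kevin describes could look something like the helper below. it
only prints the ALTER TABLE statements (the function name and the table/value
pairs are made up for illustration); in real use you would compute the MAX(id)
per table and pipe the output into the mysql client after the restore:

```shell
# Sketch: emit one ALTER TABLE statement per table, setting the
# auto_increment counter just past the highest id still in use.
# Table names and id values below are illustrative assumptions.
bump_autoinc() {
    # $1 = table name, $2 = highest id currently in use
    printf 'ALTER TABLE %s AUTO_INCREMENT = %d;\n' "$1" $(( $2 + 1 ))
}

bump_autoinc customers 1042   # prints: ALTER TABLE customers AUTO_INCREMENT = 1043;
bump_autoinc orders 99        # prints: ALTER TABLE orders AUTO_INCREMENT = 100;
```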

> With regard 1, the actual definition of auto_increment doesn't preclude
> re-use of numbers as far as I know, so if you're relying on it not to, you've
> got broken code anyway.  That means the mysqldump is doing the correct thing,
> according to spec for auto_increment - there's no requirement in there to
> retain the highest number.  The name of auto_increment is misleading,
> obviously ;)

ok. "works as designed" - it's not an implementation bug it's a design bug :)


> With regard Craig's comment, if your database leaves hanging references to
> non-existent data around, you've got a broken database, whether you've
> realised it yet or not.

true, i didn't think about that at the time.  it was just my initial reaction
to the idea that there was weirdness with restoring a mysql dump.  since
dumping to text (or other re-importable format) is the only good way of backing
up a database, it seems like a major problem... being able to *reliably* back up
and restore a database is, IMO, an essential feature of any database.  you need
to be certain that what you will restore is *identical* to what you backed up.

whether it actually is a major problem or not, i don't know.  that's why i was
asking.  the alter table workaround you mentioned seems reasonable.


OTOH, since mysql doesn't actually do transactions(*) or check referential
integrity, it's quite possible to have such references in the db.  and in this
case, an import like this will convert dangling references which point to
non-existent records into references that point to records that actually exist
(but aren't the right ones).

(*) yes, i know about innodb... but hardly anyone actually uses it because that
means giving up the only feature that mysql users (mistakenly) care about - raw
speed.  not that mysql is actually any faster in the real world with multiple
simultaneous readers and writers, but that's the mythology.

> General note:  We make a policy of using auto_increment _only_ to create
> sequence tables, which we manage ourselves.  This is in line with postgres
> and oracle's use of sequence tables, and makes porting easier.  We don't
> bother with ensuring that the next ID is higher than all previous ones - as
> long as they're unique, that's sufficient, any references to a defunct entry
> are removed when the entry is removed.

postgres sequences (and serial fields) are what i'm used to.
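for reference, a minimal reading of that sequence-table pattern (a sketch with
assumed names, not Kevin's actual schema) is a one-row table bumped atomically
with MySQL's LAST_INSERT_ID(expr) idiom. the snippet just writes the SQL to a
file and prints it; feed it to the mysql client for real use:

```shell
# Sketch of a hand-managed sequence table, analogous to a postgres
# sequence.  seq_customer is an illustrative name.
cat > /tmp/seq.sql <<'EOF'
CREATE TABLE seq_customer (id INT NOT NULL);
INSERT INTO seq_customer VALUES (0);
-- allocate the next id atomically:
UPDATE seq_customer SET id = LAST_INSERT_ID(id + 1);
SELECT LAST_INSERT_ID();
EOF
cat /tmp/seq.sql
```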

craig

-- 
craig sanders <[EMAIL PROTECTED]>

The next time you vote, remember that "Regime change begins at home"

Re: q re transferring mysql db from redhat to debian

2004-07-27 Thread Kevin Littlejohn
Craig Sanders wrote:
On Tue, Jul 27, 2004 at 09:00:58AM -0400, Fraser Campbell wrote:
On July 27, 2004 03:58 am, Henrik Heil wrote:
The record_ids will stay the same with mysqldump.
What makes you think they will not?
I have seen problems with this.  The existing auto-incremented fields were 
just fine but new ones were a little bit off.  In a normal mysqldb if you 
have a single record with id 1 and delete it then add another record the new 
record will get id 2 (not filling in the missing 1).  I've seen a case that 
after a mysqldump and restore the new records did not honour that 
behaviour, "missing" ids were reused.  I'm sure that I did something wrong 
with the dump but in that case it was not important so I didn't research it 
further.

that's bizarre... and could easily lead to a hopelessly corrupted database when
other tables refer to that id field.
how are you supposed to restore a mysql db from backup then?
Two answers:
1) Why are you relying on the auto_increment field to increment from 
highest point each time?  So long as it gives you a unique value (and it 
should always do that), it shouldn't matter if it's re-using an old 
value (if it does, you shouldn't have deleted the old value...). 
Certainly, if you're referring to those IDs elsewhere, and you've 
deleted the record it was referring to, good database design would be to 
not leave the references lying around, imnsho.

2) You can set the point to increment from, in a fairly hackish way, by 
doing a "alter table tbl_name auto_increment = x" where x is the highest 
number in use.  Requires scripting around your backup/restore process, 
unfortunately.

With regard 1, the actual definition of auto_increment doesn't preclude 
re-use of numbers as far as I know, so if you're relying on it not to, 
you've got broken code anyway.  That means the mysqldump is doing the 
correct thing, according to spec for auto_increment - there's no 
requirement in there to retain the highest number.  The name of 
auto_increment is misleading, obviously ;)

With regard Craig's comment, if your database leaves hanging references 
to non-existent data around, you've got a broken database, whether 
you've realised it yet or not.

General note:  We make a policy of using auto_increment _only_ to create 
sequence tables, which we manage ourselves.  This is in line with 
postgres and oracle's use of sequence tables, and makes porting easier. 
 We don't bother with ensuring that the next ID is higher than all 
previous ones - as long as they're unique, that's sufficient, any 
references to a defunct entry are removed when the entry is removed.

KJL
--
Kevin Littlejohn
Obsidian Consulting Group
phone: +613 9355 7844
skype: callto://silarsis


Re: q re transferring mysql db from redhat to debian

2004-07-27 Thread Craig Sanders
On Tue, Jul 27, 2004 at 09:00:58AM -0400, Fraser Campbell wrote:
> On July 27, 2004 03:58 am, Henrik Heil wrote:
> > The record_ids will stay the same with mysqldump.
> > What makes you think they will not?
> 
> I have seen problems with this.  The existing auto-incremented fields were 
> just fine but new ones were a little bit off.  In a normal mysqldb if you 
> have a single record with id 1 and delete it then add another record the new 
> record will get id 2 (not filling in the missing 1).  I've seen a case that 
> after a mysqldump and restore the new records did not honour that 
> behaviour, "missing" ids were reused.  I'm sure that I did something wrong 
> with the dump but in that case it was not important so I didn't research it 
> further.

that's bizarre... and could easily lead to a hopelessly corrupted database when
other tables refer to that id field.

how are you supposed to restore a mysql db from backup then?


craig

-- 
craig sanders <[EMAIL PROTECTED]>

The next time you vote, remember that "Regime change begins at home"





Re: Cloning disks with dd and netcat

2004-07-27 Thread Giles Nunn
Hi all,

I tried the dd route to do exactly the same thing. I wanted to recreate
a server or a variation of it quickly and easily. Eventually I gave up
and used systemimager instead. It is quick and simple. It is based on
rsync and it is in woody. I have it working using network boot and it
takes ~3 minutes to boot, partition and install a complete base server -
fully automatically. I am still playing with it as I want to script the
autoconfig of multiple copies of a base server, but it works brilliantly
for a simple clone as it is.

HTH

Giles


On Tue, 2004-07-27 at 11:12, David Ross wrote:
> Hi
> 
> I'm having problems cloning a hard drive. What I want to try do is set
> aside a server on our internal network to store a whole bunch of server
> images. So instead of manually installing a new server and copying
> configs across, we just extract the appropriate image onto a harddrive.
> 
> At the moment I've got an image on the image server but when extracting
> it to the new disk I am getting an error. I created the image by doing
> this:
> 
> dd if=/dev/zero of=empty.tmp bs=1024
> count=FREEBLOCKSCOUNT
> rm empty.tmp
> 
> This cleans up the disk space for each partition apparently for better
> compression, but I'm not sure about the 2nd line and FREEBLOCKSCOUNT.
> After this, which took a while, I issued the following from the image
> server which creates the image:
> 
> nc -v -w 120 -p  -l < /dev/null > image.gz
> 
> Then from the machine we want to clone, I booted with a knoppix cd then
> set it up on the network then did this:
> 
> dd if=/dev/hda bs=512 | gzip -c | nc -v -w 60  
> 
> After a few hours, I ended up with image.gz sitting on the image server.
> 
> Now, I've got a new 20Gb disk ready, slotted it into a new PC and booted with
> trusty knoppix. I then configured her on the int network with a DHCP
> assigned IP address.
> 
> I then tried to cat the image from the image server across the network
> using netcat like the doc said. This is what I did:
> 
> imgserver:/# cat image.gz | nc -v -w 120 -p  -l
> listening on [any]  ...
> 
> Then I did this on the new pc with knoppix booted:
> 
> tty1[/]# nc -v -w 60 XXX.XXX.XXX.XXX  < /dev/null | gzip -dc | dd
> of=/dev/hda bs=512 
> imgserver.whatever.co.za [XXX.XXX.XXX.XXX]  (?) open
> 
> Now it *should* be accepting the image through port  and extracting
> it to /dev/hda but after a few seconds the error:
> 
> connect to [XXX.XXX.XXX.XXX] from fw.whatever.co.za [YYY.YYY.YYY.YYY]
> 1026
> too many output retries : Broken pipe
> imgserver:/#
> 
> comes up on the image server and then this:
> 
> hda: read_intr: status=0x59 { DriveReady SeekComplete DataRequest Error
> }
> hda: read_intr: error=0x40 { UncorrectableError }, LBAsect=19528,
> sector=19528
> end_request: I/O error, dev 03:00 (hda), sector 19528
> dd: writing `/dev/hda': Input/output error 
> 19529+0 records in
> 19528+0 records out
> 9998336 bytes transferred in 9.226423 seconds (1083663 bytes/sec) 
> too many output retries : Broken pipe
> [EMAIL PROTECTED]/]#
> 
> comes up on the new pc. Obviously the first thing I did was swap the
> harddrive just in case the one in the new pc is faulty, but I get the
> same error. I also tried to use zcat but no luck.
> 
> The docs I've been using can be found at http://wyae.de/docs/img_dd.php
> 
> Please let me know if I need to supply more info on this or if there is
> anything I've left out. Thanks for your time and effort in advance!
> 
> Thanks
> Dave
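One plausible reading of the FREEBLOCKSCOUNT placeholder in the quoted recipe:
it is the number of free 1K blocks on the filesystem, which df can report. The
sketch below (the mount point and the tiny demo count are assumptions) shows
the idea without actually filling the disk; with count="$free_kb" the dd would
zero all remaining free space so the later gzip step compresses well:

```shell
# Zero-fill sketch: find the free 1K blocks on a filesystem, write a
# throwaway file of zeroes there, then delete it.  Only 8 blocks are
# written here as a demo.
mp=/tmp                                   # mount point -- an assumption
free_kb=$(df -P -k "$mp" | awk 'NR==2 {print $4}')
echo "free 1K blocks on $mp: $free_kb"
dd if=/dev/zero of="$mp/empty.tmp" bs=1024 count=8 2>/dev/null
rm -f "$mp/empty.tmp"
```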
-- 

Giles Nunn - ISP Officer
Carms ICT Development Centre
+44 1267 228277






Re: Cloning disks with dd and netcat

2004-07-27 Thread Leonardo Boselli
On 27 Jul 2004 at 18:18, Jan-Benedict Glaw wrote:
>> Try that with
>> a formerly booting NT system on a NTFS filesystem:)  just copy the
>> "root"... by root i say /dev/hda , the raw partition.  I worked fine
>> for me many times.
> 
> Aren't MS-DOS' io.sys and msdos.sys expected to be in specific areas?

only one of the two, in recent versions; anyway, as long as the disk has the
same number of sectors and heads, they will end up in the same place
--
Leonardo Boselli
Nucleo Informatico e Telematico del Dipartimento Ingegneria Civile
Universita` di Firenze , V. S. Marta 3 - I-50139 Firenze
tel +39 0554796431 cell +39 3488605348 fax +39 055495333
http://www.dicea.unifi.it/~leo





Re: Cloning disks with dd and netcat

2004-07-27 Thread Jan-Benedict Glaw
On Tue, 2004-07-27 17:52:25 +0200, Leonardo Boselli <[EMAIL PROTECTED]>
wrote in message <[EMAIL PROTECTED]>:
> On 27 Jul 2004 at 17:42, Jan-Benedict Glaw wrote:
> > > I use knoppix to make a cpio image of the 'root filesystem' I'll be
> > > imaging. (eg mounted / /usr /home and /var). Then with another
> > > script
> > Try that with a formerly booting NT system on a NTFS filesystem:)
> 
> just copy the "root"... by root i say /dev/hda , the raw device.
> It worked fine for me many times.

Aren't MS-DOS' io.sys and msdos.sys expected to be in specific areas?

MfG, JBG

-- 
Jan-Benedict Glaw   [EMAIL PROTECTED]. +49-172-7608481 _ O _
"Eine Freie Meinung in  einem Freien Kopf| Gegen Zensur | Gegen Krieg  _ _ O
 fuer einen Freien Staat voll Freier Bürger" | im Internet! |   im Irak!   O O O
ret = do_actions((curr | FREE_SPEECH) & ~(NEW_COPYRIGHT_LAW | DRM | TCPA));




Re: q re transferring mysql db from redhat to debian

2004-07-27 Thread Henrik Heil
the reason why i don't want to do the database transfer using data
generated by mysqldump is because i want all the auto-generated
record_ids to stay the same in the new system.
The record_ids will stay the same with mysqldump.
What makes you think they will not?
 
I have seen problems with this.  The existing auto-incremented fields were 
just fine but new ones were a little bit off.  In a normal mysqldb if you 
have a single record with id 1 and delete it then add another record the new 
record will get id 2 (not filling in the missing 1).  I've seen a case that 
after a mysqldump and restore the new records did not honour that 
behaviour, "missing" ids were reused.  I'm sure that I did something wrong 
with the dump but in that case it was not important so I didn't research it 
further.
My bad -- this is indeed a problem using mysqldump. I just checked the 
manpage and it seems that you cannot tell mysqldump to add 
AUTO_INCREMENT=... to the CREATE TABLE statement (please correct me if 
you know a way).

phpmyadmin creates dumps with AUTO_INCREMENT information -- i thought 
mysqldump would do the same -- but it does not.
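a quick way to check whether a given dump preserves the counter is to grep it
for AUTO_INCREMENT=... in the CREATE TABLE statements. shown here against a
tiny fabricated dump fragment, not real mysqldump output:

```shell
# Fake two-statement dump fragment for illustration; a dump that keeps
# the counter carries AUTO_INCREMENT=N in its CREATE TABLE line.
cat > /tmp/dump.sql <<'EOF'
CREATE TABLE t (id INT NOT NULL AUTO_INCREMENT, PRIMARY KEY (id))
  ENGINE=MyISAM AUTO_INCREMENT=105;
INSERT INTO t VALUES (104);
EOF
grep -c 'AUTO_INCREMENT=' /tmp/dump.sql   # -> 1
```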

Best regards,
Henrik
--
Henrik Heil, zweipol Coy & Heil GbR
http://www.zweipol.net/


Re: Cloning disks with dd and netcat

2004-07-27 Thread Leonardo Boselli
On 27 Jul 2004 at 17:42, Jan-Benedict Glaw wrote:
> > I use knoppix to make a cpio image of the 'root filesystem' I'll be
> > imaging. (eg mounted / /usr /home and /var). Then with another
> > script
> Try that with a formerly booting NT system on a NTFS filesystem:)

just copy the "root"... by root i say /dev/hda , the raw device.
It worked fine for me many times.
--
Leonardo Boselli
Nucleo Informatico e Telematico del Dipartimento Ingegneria Civile
Universita` di Firenze , V. S. Marta 3 - I-50139 Firenze
tel +39 0554796431 cell +39 3488605348 fax +39 055495333
http://www.dicea.unifi.it/~leo





Re: Cloning disks with dd and netcat

2004-07-27 Thread Robert Waldner

On Tue, 27 Jul 2004 11:15:54 EDT, "George Georgalis" writes:
>I don't think your system will work though, because you are writing the
>mbr and the partition table with the fs image, you would have to have
>the _exact_ same disk, which is less common than you may expect.

If the new disk is _larger or equal_ in size it'll work. Granted, you 
 might lose a bit of space if it's larger.

cheers,
&rw
-- 
/ Ing. Robert Waldner | Security Engineer |  CoreTec IT-Security  \
\   <[EMAIL PROTECTED]>   | T +43 1 503 72 73 | F +43 1 503 72 73 x99 /






Re: Cloning disks with dd and netcat

2004-07-27 Thread Jan-Benedict Glaw
On Tue, 2004-07-27 11:15:54 -0400, George Georgalis <[EMAIL PROTECTED]>
wrote in message <[EMAIL PROTECTED]>:
> On Tue, Jul 27, 2004 at 04:09:04PM +0200, Volker Tanger wrote:
> yes knoppix 2 will save time, you can "su -" from x as well.

...or just switch away from X11 down to one of the text consoles.

> I don't think your system will work though, because you are writing the
> mbr and the partition table with the fs image, you would have to have
> the _exact_ same disk, which is less common than you may expect.

CHS values are mostly meaningless today, so for all "modern" software,
it's mostly okay working on an image that was ripped off a different
disk, as long as the source disk is smaller than the target:)

Of course, you lose the additional size of your new disk, if it's
larger. Possibly one can "fix" that by adding additional partitions, but
I've never ever tried that, to be honest:)

> I use knoppix to make a cpio image of the 'root filesystem' I'll be
> imaging. (eg mounted / /usr /home and /var). Then with another script

Try that with a formerly booting NT system on a NTFS filesystem:)

> under knoppix I partition the disk per the application, wget the cpio
> with http/https (maybe with passwd) to stdout and unzip the cpio
> image to the filesystem.  Do a similar procedure to put the right
> kernel/modules on the target, complete with vmlinuz symlink, and run a
> bootloader.

That'd work for Unix systems, and if you used some hacked tar, that
could even work with ACLs. But you'll face a hard time trying to boot
DOS or Windows afterwards (even Linux wouldn't boot, as long as you
didn't write a new boot sector for it).

Basically, what is it we want to achieve (normally)?

* Prepare a full crash-recovery backup for a machine, while a
  cold spare box (or at least a HDD of same or larger size) is
  available.   A dd-like backup is a cool thing for that,
  mostly independent of the operating system.

* Same as above, but with Linux (or similar) as OS.   A small
  sfdisk input script and tar-like backup may be a lot faster
  than the above, additionally allowing you to easily resize
  partitions. However, you've got to take care about booting the
  box by re-installing the bootloader.

* Simple data backup.   Just use tar/cpio/whatever. Possibly,
  utilities that know about the filesystem (dump, ...) may even
  be faster than accessing all the single files with tar-alikes.

> It is time consuming to get set up, but the process is designed to be
> portable, fast and maintainable. On a fast network the image can be done
> in 5 to 15 minutes.

Right, but it leaves you with the problem of making non-Linux systems
bootable:)

MfG, JBG

-- 
Jan-Benedict Glaw   [EMAIL PROTECTED]. +49-172-7608481 _ O _
"Eine Freie Meinung in  einem Freien Kopf| Gegen Zensur | Gegen Krieg  _ _ O
 fuer einen Freien Staat voll Freier Bürger" | im Internet! |   im Irak!   O O O
ret = do_actions((curr | FREE_SPEECH) & ~(NEW_COPYRIGHT_LAW | DRM | TCPA));




Re: Cloning disks with dd and netcat

2004-07-27 Thread George Georgalis
On Tue, Jul 27, 2004 at 04:09:04PM +0200, Volker Tanger wrote:
>Boot in text mode ("knoppix 2") or Ctrl-Alt-1 from X11 into console. Try
>again then. 

yes knoppix 2 will save time, you can "su -" from x as well.

your problem though is with the fstab knoppix creates... wait, you're not
mounting the partition, you're dd'ing it. trust me, it's the fstab. change
the options to something you are familiar with, like "defaults", and
you'll be able to write the mbr and partitions.

I don't think your system will work though, because you are writing the
mbr and the partition table with the fs image, you would have to have
the _exact_ same disk, which is less common than you may expect.

I use knoppix to make a cpio image of the 'root filesystem' I'll be
imaging. (eg mounted / /usr /home and /var). Then with another script
under knoppix I partition the disk per the application, wget the cpio
with http/https (maybe with passwd) to stdout and unzip the cpio
image to the filesystem.  Do a similar procedure to put the right
kernel/modules on the target, complete with vmlinuz symlink, and run a
bootloader.

It is time consuming to get set up, but the process is designed to be
portable, fast and maintainable. On a fast network the image can be done
in 5 to 15 minutes.

// George


-- 
George Georgalis, Architect and administrator, Linux services. IXOYE
http://galis.org/george/  cell:646-331-2027  mailto:[EMAIL PROTECTED]
Key fingerprint = 5415 2738 61CF 6AE1 E9A7  9EF0 0186 503B 9831 1631





Re: recovery from reiser

2004-07-27 Thread Marek L. Kozak
On Mon, 2004-07-26 at 23:48, Fraser Campbell wrote: 

> You can run "reiserfsck --rebuild-tree -S" on the partition.  Read the 
> manpage, understand what it will do.  Backup the partition first so that if 
> things go wrong you'll have a second chance.
Nice. It works!


> Good luck and make sure you have good backups from now on ;-)
The files were backed up, I bought a brand new, shiny disk to be a
backup disk, but it died unexpectedly along with 100GB of data.

One more question:
Since I'll get a new disk, what is better to put on it: XFS, as I had before,
or Reiser4?
The disk is 160GB, 1 partition with data only on it. Data files are big,
very often bigger than 2GB. I don't care about speed but space on it -
XFS filesystem itself took more than 4GB. I think it is less than
Reiser3.6 would take, but haven't dealt with Reiser4 yet.
-- 
Regards,
MareK L. Kozak






Re: Cloning disks with dd and netcat

2004-07-27 Thread Volker Tanger
Greetings!

> > Do you have any kind of BIOS-configurable write/virus protection
> > for that harddisc switched off? 
> 
> BIOS is ignored nicely once the kernel switched on VM and went into
protected mode...

Yes, I know - but I've encountered hardware where the "100% IDE"
controller could be switched into read-only mode EVEN FOR NON-BIOS
operation. Granted, it was a jumper back then (probably breaker plus
pullup/pulldown for R/W signal line), but that could be done with some
CMOS/Flash setting today, too.

It just struck me as odd that root could not write even the first few
bits...

*ahem*

Stop. Different idea. 

@David Ross: you wrote you booted from Knoppix. I hope you did use plain
text mode? If you used the X11/KDE desktop you're usually logged in as
"knoppix" or whatever plain/non-root user. And of course you're not
allowed to (write) access the raw device as ordinary user...

Boot in text mode ("knoppix 2") or Ctrl-Alt-1 from X11 into console. Try
again then. 

If this does not solve the problem, we'll have to search on.

Bye

Volker Tanger
ITK Security


PS: I've updated my docs accordingly - that's an easily overlooked
stumbling block.





Re: Cloning disks with dd and netcat

2004-07-27 Thread Jan-Benedict Glaw
On Tue, 2004-07-27 13:13:22 +0200, Volker Tanger <[EMAIL PROTECTED]>
wrote in message <[EMAIL PROTECTED]>:
> On Tue, 27 Jul 2004 12:12:33 +0200 "David Ross" <[EMAIL PROTECTED]>
> wrote:
> [...]
> > Obviously the first thing I did was swap the
> > harddrive just in case the one in the new pc is faulty, but I get the
> > same error. 

> Do you have any kind of BIOS-configurable write/virus protection
> for that harddisc switched off? 

BIOS is ignored nicely once the kernel switched on VM and went into
protected mode...

MfG, JBG

-- 
Jan-Benedict Glaw   [EMAIL PROTECTED]. +49-172-7608481 _ O _
"Eine Freie Meinung in  einem Freien Kopf| Gegen Zensur | Gegen Krieg  _ _ O
 fuer einen Freien Staat voll Freier Bürger" | im Internet! |   im Irak!   O O O
ret = do_actions((curr | FREE_SPEECH) & ~(NEW_COPYRIGHT_LAW | DRM | TCPA));




Re: q re transferring mysql db from redhat to debian

2004-07-27 Thread Fraser Campbell
On July 27, 2004 03:58 am, Henrik Heil wrote:

> > the reason why i don't want to do the database transfer using data
> > generated by mysqldump is because i want all the auto-generated
> > record_ids to stay the same in the new system.
>
> The record_ids will stay the same with mysqldump.
> What makes you think they will not?

I have seen problems with this.  The existing auto-incremented fields were 
just fine but new ones were a little bit off.  In a normal mysqldb if you 
have a single record with id 1 and delete it then add another record the new 
record will get id 2 (not filling in the missing 1).  I've seen a case that 
after a mysqldump and restore the new records did not honour that 
behaviour, "missing" ids were reused.  I'm sure that I did something wrong 
with the dump but in that case it was not important so I didn't research it 
further.
-- 
Fraser Campbell <[EMAIL PROTECTED]> http://www.wehave.net/
Georgetown, Ontario, Canada   Debian GNU/Linux





Re: backup DNS question

2004-07-27 Thread Marek Isalski
Kilian Krause writes: 

maybe i can throw in a reference to the "dnstracer" tool as well.
That's usually pretty handy for checking where your bad dns data comes
from (in case you don't see the results you expected).
Another tool I find incredibly useful is DNS Bajaj: 

http://www.zonecut.net/dns/index.cgi 

Good for checking the health of your DNS. 

--
Marek Isalski
Partner, http://www.faelix.net/ 



RE: Cloning disks with dd and netcat

2004-07-27 Thread David Ross
Thanks for the replies so far guys. Yeah I suppose I could RSYNC it
across like Volker said but yeah, I'm trying to do this in the simplest
way, ie no messing around with the bootloader or installing a base
system first. I'm trying again now with a new image...etc

Thanks again, I'll let you guys know how it goes.
Dave



-Original Message-
From: Robert Waldner [mailto:[EMAIL PROTECTED] 
Sent: 27 July 2004 01:35
To: [EMAIL PROTECTED]
Subject: Re: Cloning disks with dd and netcat 


On Tue, 27 Jul 2004 13:13:22 +0200, "Volker Tanger" writes:
>What happens if you do the partitioning manually and image the 
>partitions (/dev/hda1, /dev/hda2, ...) one-by-one instead of the 
>complete disc? Well, doing the partitioning manually, you could RSYNC 
>the server instead of DD+NETCATing, which probably is faster and fails 
>more gracefully.

But would mean mucking around with the bootloader, which usually is the
point for doing _complete_ disc-images.

cheers,
&rw
--
/ Ing. Robert Waldner | Security Engineer |  CoreTec IT-Security  \
\   <[EMAIL PROTECTED]>   | T +43 1 503 72 73 | F +43 1 503 72 73 x99 /





Re: Cloning disks with dd and netcat

2004-07-27 Thread Robert Waldner

On Tue, 27 Jul 2004 14:05:14 +0200, "Volker Tanger" writes:
>True - but DDing a 200GB system disc takes quite some time, while
>manually handling partition+mkfs+lilo plus RSYNCing 1.2GB usually is
>LOTS faster...
>
>Upgrading to servers with newer/bigger discs is also less painful than
>with imaging.
>
>But for mostly uniform hardware or testlabs (with frequent system
>bashing) it's the leisure-factor that is heavily in favour of DD images,
>I confess...   ;-)

That's why I often do dump/restore followed by dd'ing the first couple 
 bytes ;)
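one way to read "dd'ing the first couple bytes": copy only the 446 bytes of
MBR boot code, so the target keeps its own partition table. sketched on plain
files here; on real hardware the arguments would be block devices such as
/dev/hda and /dev/hdb:

```shell
# Fake a 1K "old disk", then copy just its first 446 bytes (the MBR
# boot-code area) to the "new disk" image.
dd if=/dev/urandom of=/tmp/olddisk.img bs=1024 count=1 2>/dev/null
dd if=/tmp/olddisk.img of=/tmp/newdisk.img bs=446 count=1 2>/dev/null
wc -c < /tmp/newdisk.img   # 446 bytes copied
```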

cheers,
&rw
-- 
/ Ing. Robert Waldner | Security Engineer |  CoreTec IT-Security  \
\   <[EMAIL PROTECTED]>   | T +43 1 503 72 73 | F +43 1 503 72 73 x99 /






Re: Cloning disks with dd and netcat

2004-07-27 Thread Volker Tanger
Greetings!

> >Well, doing the partitioning manually, you could RSYNC
> >the server instead of DD+NETCATing, which probably is faster and
> >fails more gracefully.
> 
> But would mean mucking around with the bootloader, which usually is
> the  point for doing _complete_ disc-images.

True - but DDing a 200GB system disc takes quite some time, while
manually handling partition+mkfs+lilo plus RSYNCing 1.2GB usually is
LOTS faster...

Upgrading to servers with newer/bigger discs is also less painful than
with imaging.

But for mostly uniform hardware or testlabs (with frequent system
bashing) it's the leisure-factor that is heavily in favour of DD images,
I confess...   ;-)

Bye

Volker Tanger
ITK Security





Re: Cloning disks with dd and netcat

2004-07-27 Thread Robert Waldner

On Tue, 27 Jul 2004 13:13:22 +0200, "Volker Tanger" writes:
>What happens if you do the partitioning manually and image the
>partitions (/dev/hda1, /dev/hda2, ...) one-by-one instead of the
>complete disc? Well, doing the partitioning manually, you could RSYNC
>the server instead of DD+NETCATing, which probably is faster and fails
>more gracefully.

But would mean mucking around with the bootloader, which usually is the point for doing _complete_ disc-images.

cheers,
&rw
-- 
/ Ing. Robert Waldner | Security Engineer |  CoreTec IT-Security  \
\   <[EMAIL PROTECTED]>   | T +43 1 503 72 73 | F +43 1 503 72 73 x99 /






Re: Question about drac-add tool and courier

2004-07-27 Thread Tomàs Núñez
Hi again! :)
I wrote the attached email about a month ago. Since then, I've googled a lot, 
I've written to the courier list, to the drac patch author, and to some other 
lists, but I've had no answer anywhere... And I don't know what to do :P

I have two separate mail servers (one for incoming and one for outgoing), and 
I'd like to use some "pop-before-smtp". 

I know that almost all email clients support smtp-auth, but users are, say, 
"uncomfortable with changes" (that is, lazy to death), and I need to find the 
easiest way for them.

Since IP-auth is not doing its job (people don't connect to the internet 
through our servers, so we can't know what their IPs are...), I need to make a 
change, and the best option I've found is "pop-before-smtp". OK, it's not the 
best solution, but it will do exactly what I want.

Can any of you give some hints on this?

I'm using "Debian Sid" postfix (2.1.4-1), courier-authdaemon, courier-pop, 
courier-imap, courier-ldap (all 0.45.6.20040712-1), drac and drac-dev 
(1.12-2)

I think I should only need to touch "courier-authdaemon" to make drac work, but 
now I'm lost and don't know what else to do. You can read the attached email; 
that's what I did...

So please, any help will be very much appreciated

Thanks a lot
Tomas

On Monday, 21 June 2004 at 14:13, Tomàs Núñez Lirola wrote:
> Hi
> In the DRAC homepage(http://mail.cc.umanitoba.ca/drac/pop.html) I've seen a
> tool to make DRAC work without modifying any courier-pop sources
> (http://mail.cc.umanitoba.ca/drac/courier-exec.txt). I've tried it, but I
> get no response and I don't know where to look...
>
> I have separate pop and smtp servers, and I've done this:
> On the smtp server (Debian unstable):
>
> apt-get install drac
> host# ps uax|grep drac
> root  5568  0.0  0.0  2388  808 ?S15:36
> 0:00 /usr/sbin/rpc.dracd -i -e 30 /var/lib/drac/dracd.db
>
> On the courier-pop server:
> apt-get install drac-dev
> cc -o drac-add drac-add.c -ldrac
> cp drac-add /usr/lib/courier/authlib/drac-add
>
> And then I added "drac-add" to "/etc/courier/authdaemonrc"  on
> "authmodulelist" like this:
> authmodulelist="authldap authpam drac-add"
>
> I restarted courier-authdaemon and courier-pop, but I see no activity...
>  Where should I see some activity log? I've done "grep -ir drac-add
>  /var/log/*" and I get only these results:
> /var/log/messages:Jun 15 15:42:05 orc authdaemond.ldap: authdaemon:
> modules="authldap authpam drac-add", daemons=5
> /var/log/syslog:Jun 15 15:42:05 orc authdaemond.ldap: authdaemon:
> modules="authldap authpam drac-add", daemons=5
> /var/log/user.log:Jun 15 15:42:05 orc authdaemond.ldap: authdaemon:
> modules="authldap authpam drac-add", daemons=5
>
> So, do any of you think I am doing something wrong? Where can I see what's
> going on?
> How can I see if drac-add is sending rpc-requests correctly, or if drac is
> receiving rpc-requests correctly? Some log
>
> Thanks in advance
> Tomàs
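Not drac-specific, but one generic sanity check worth trying: ask the portmapper which RPC programs are registered on each end. If rpc.dracd registered itself, it should show up in the list (the exact program name/number it registers under may vary between builds):

```shell
# On the smtp server running rpc.dracd: list registered RPC services.
rpcinfo -p localhost
# From the pop server, run the same check across the network
# (smtp-server is a placeholder hostname):
rpcinfo -p smtp-server
```

If dracd is missing from both lists, the problem is on the server side before drac-add ever gets involved.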



Re: Cloning disks with dd and netcat

2004-07-27 Thread Volker Tanger
Greetings!

On Tue, 27 Jul 2004 12:12:33 +0200 "David Ross" <[EMAIL PROTECTED]>
wrote:
> tty1[/]# nc -v -w 60 XXX.XXX.XXX.XXX  < /dev/null | gzip -dc | dd
> of=/dev/hda bs=512 
> imgserver.whatever.co.za [XXX.XXX.XXX.XXX]  (?) open
[...] 
> hda: read_intr: status=0x59 { DriveReady SeekComplete DataRequest
> Error}
> hda: read_intr: error=0x40 { UncorrectableError }, LBAsect=19528,
> sector=19528
> end_request: I/O error, dev 03:00 (hda), sector 19528
> dd: writing `/dev/hda': Input/output error 
> 19529+0 records in
> 19528+0 records out
> 9998336 bytes transferred in 9.226423 seconds (1083663 bytes/sec) 
> too many output retries : Broken pipe
[...]
> Obviously the first thing I did was swap the
> harddrive just in case the one in the new pc is faulty, but I get the
> same error. 


Obviously the problem is that DD cannot write (for whatever reason) to
/dev/hda - not a single byte.

Have you checked that any BIOS-configurable write/virus protection for that
harddisc is switched off?

What happens if you do the partitioning manually and image the
partitions (/dev/hda1, /dev/hda2, ...) one-by-one instead of the
complete disc? Well, doing the partitioning manually, you could RSYNC
the server instead of DD+NETCATing, which probably is faster and fails
more gracefully.
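For the partition-by-partition variant, a sketch (hostname and port are placeholders; sfdisk -d dumps the partition table in a form sfdisk can replay on the target):

```shell
# On the source: save the partition layout, then stream each partition.
sfdisk -d /dev/hda > hda.partitions
dd if=/dev/hda1 bs=64k | gzip -c | nc -v -w 60 imgserver 9001
# On the target, first replay the layout, then reverse the pipe:
#   sfdisk /dev/hda < hda.partitions
#   nc -l -p 9001 | gzip -dc | dd of=/dev/hda1 bs=64k
```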

Bye

Volker Tanger
ITK Security





Cloning disks with dd and netcat

2004-07-27 Thread David Ross
Hi

I'm having problems cloning a hard drive. What I want to try to do is set
aside a server on our internal network to store a whole bunch of server
images. So instead of manually installing a new server and copying
configs across, we just extract the appropriate image onto a harddrive.

At the moment I've got an image on the image server but when extracting
it to the new disk I am getting an error. I created the image by doing
this:

dd if=/dev/zero of=empty.tmp bs=1024 count=FREEBLOCKSCOUNT
rm empty.tmp

This zeroes out the free space on each partition, apparently for better
compression, but I'm not sure about the count=FREEBLOCKSCOUNT part.
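One way to fill in FREEBLOCKSCOUNT, assuming the usual Linux df output: take the free 1K-blocks column for the filesystem in question (the -P flag forces one line per filesystem). Alternatively, drop count= entirely and just let dd run until "No space left on device" before removing empty.tmp.

```shell
# Free 1K blocks on the filesystem mounted at / (4th column of `df -kP`):
FREEBLOCKSCOUNT=$(df -kP / | awk 'NR==2 {print $4}')
echo "$FREEBLOCKSCOUNT"
```

That value then drops straight into the dd line's count= above.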
After this, which took a while, I issued the following from the image
server which creates the image:

nc -v -w 120 -p  -l < /dev/null > image.gz

Then from the machine we want to clone, I booted with a knoppix cd then
set it up on the network then did this:

dd if=/dev/hda bs=512 | gzip -c | nc -v -w 60  

After a few hours, I ended up with image.gz sitting on the image server.

Now, I've got a new 20GB drive ready, slotted it into a new PC and booted with
trusty knoppix. I then configured it on the internal network with a DHCP
assigned IP address.

I then tried to cat the image from the image server across the network
using netcat like the doc said. This is what I did:

imgserver:/# cat image.gz | nc -v -w 120 -p  -l
listening on [any]  ...

Then I did this on the new pc with knoppix booted:

tty1[/]# nc -v -w 60 XXX.XXX.XXX.XXX  < /dev/null | gzip -dc | dd
of=/dev/hda bs=512 
imgserver.whatever.co.za [XXX.XXX.XXX.XXX]  (?) open

Now it *should* be accepting the image through port  and extracting
it to /dev/hda but after a few seconds the error:

connect to [XXX.XXX.XXX.XXX] from fw.whatever.co.za [YYY.YYY.YYY.YYY]
1026
too many output retries : Broken pipe
imgserver:/#

comes up on the image server and then this:

hda: read_intr: status=0x59 { DriveReady SeekComplete DataRequest Error
}
hda: read_intr: error=0x40 { UncorrectableError }, LBAsect=19528,
sector=19528
end_request: I/O error, dev 03:00 (hda), sector 19528
dd: writing `/dev/hda': Input/output error 
19529+0 records in
19528+0 records out
9998336 bytes transferred in 9.226423 seconds (1083663 bytes/sec) 
too many output retries : Broken pipe
[EMAIL PROTECTED]/]#

comes up on the new pc. Obviously the first thing I did was swap the
harddrive just in case the one in the new pc is faulty, but I get the
same error. I also tried to use zcat but no luck.

The docs I've been using can be found at http://wyae.de/docs/img_dd.php

Please let me know if I need to supply more info on this or if there is
anything I've left out. Thanks for your time and effort in advance!

Thanks
Dave



Re: backup DNS question

2004-07-27 Thread Kilian Krause
Hi,

> dig @a.root-servers.net  ns
> 
> You'll get back the response or you'll get a referral to the next level 
> down...

maybe I can throw in a reference to the "dnstracer" tool as well.
It's usually pretty handy for tracking down where bad DNS data comes
from (in case you don't see the results you expected).
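Basic usage is just the zone name; debian.org below is only an example, and the flags are hedged on the versions I've seen (-s . starts the trace at a.root-servers.net, -o prints an overview of the answers received):

```shell
# Trace which nameservers hand out data for a zone, starting at the roots:
dnstracer -s . -o debian.org
```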

-- 
Best regards,
 Kilian




Re: q re transferring mysql db from redhat to debian

2004-07-27 Thread Shannon R.
--- Henrik Heil <[EMAIL PROTECTED]> wrote:
> 
> The record_ids will stay the same with mysqldump.
> What makes you think they will not?


Something must have made me think the auto-incremented
ids will be re-generated. My bad.


> 
> you could set up replication to keep the databases
> in sync until you switch to the new server


This sounds really good. I wonder why I never thought
of it (maybe because I've never tried mysql
replication).

I'm going to try this on a test machine now and see
how stable it is. Thanks for the idea!


Shannon









Re: backup DNS question

2004-07-27 Thread Nate Duehr
On Jul 26, 2004, at 10:47 PM, Nate Duehr wrote:

> On Jul 26, 2004, at 4:25 PM, Kilian Krause wrote:
>
>> Hi Dan,
>> until your ns1 goes down too, things should go fine. If you think you
>> need to worry, watch the load on ns1. If that goes alarmingly up, then
>> things start going wrong (which is *very* unlikely to happen). If
>> however you desire to make sure your DNS is safe and accessible even in
>> case ns2 is not restored soon, set up an ns3 and have it listed in whois
>> and your zonefile.
>
> Just a small technical point on this one... what's in the whois makes
> no difference.  You need to get it into the GTLD servers as an
> A-record (i.e. register it as one of your nameservers), but most
> registrars' whois data lags far behind the GTLD server records.  I can
> understand where your idea comes from that it would have to go in
> whois... registering it as a nameserver means the registrar will
> eventually put it in whois, but DNS resolvers don't look at whois and
> don't care what's in the whois servers; ultimately they look only at
> the GTLD servers.
>
> dig @a.gtld-servers.net  ns
>
> ... is the only authoritative way to see what the registrar is handing
> out for your zone after you send in the registration change.  If the
> records have changed/updated there and haven't made it into whois yet,
> it doesn't matter from the perspective of the DNS resolvers out there.
>
> And then there's caching to deal with...
Follow-up for those reading along.  Kilian's talking about a .de domain 
and I (like a bumbling idiot) said "GTLD servers" -- of course, the 
top-level DNS servers for .de are not gtld-servers.net -- sigh.

Kilian and I were talking about this off-line, but just so there's no 
confusion on the list... replace gtld-servers.net with the appropriate 
top-level domain servers for your domain suffix.

Easiest way to figure it out is to start at the root servers and work 
your way down...

dig @a.root-servers.net  ns
You'll get back the response or you'll get a referral to the next level 
down...

Whee, DNS.
--
Nate Duehr, [EMAIL PROTECTED]


Re: q re transferring mysql db from redhat to debian

2004-07-27 Thread Henrik Heil
Shannon R. wrote:
> the reason why i don't want to do the database transfer using data
> generated by mysqldump is because i want all the auto-generated record_ids
> to stay the same in the new system.
The record_ids will stay the same with mysqldump.
What makes you think they will not?
If you cannot afford downtime or disable write access while migrating, 
you could set up replication to keep the databases in sync until you 
switch to the new server (only with mysql version >= 3.23.15).
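For reference, the 3.23/4.0-era replication setup boils down to a couple of my.cnf entries plus a replication account on the master (FILE privilege in 3.23, REPLICATION SLAVE in 4.0+). Everything below (server-ids, hostname, credentials) is a placeholder sketch, not a drop-in config:

```ini
# my.cnf on the master (the old RedHat box):
[mysqld]
log-bin
server-id = 1

# my.cnf on the slave (the new Debian box):
[mysqld]
server-id       = 2
master-host     = old-server
master-user     = repl
master-password = secret
```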

Best regards,
Henrik
--
Henrik Heil, zweipol Coy & Heil GbR
http://www.zweipol.net/


Re: Outlook and Qmail

2004-07-27 Thread David Zejda
> On Mon, Jul 26, 2004 at 06:05:33PM +0200, David Zejda wrote:
>>> dunno.  large messages obviously aren't the ONLY factor, it's a combination
>>> of factors - one of which is that the message is large.
>> I have a similar (sometimes, large messages, dialup) problem with OE +
>> Postfix.
> postfix doesn't do POP, that's the job of whatever POP daemon you're using.

Yep, sure, thanks for the correction.
I'm using courier IMAP [+POP].
I only posted the message to support the assumption that the
problem is in O/OE rather than elsewhere (e.g. in Qmail)...

David