If I have 3 dump(1) files from a filesystem and restore them via
restore -i -f dump-file, how do I layer them together?
That is, how do I tell restore that I want to restore a level 0, 1, and 2?
Do I run restore -i -f dump-0, then restore -i -f dump-1, and
then restore -i -f dump-2?
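Per restore(8), the usual approach is to apply the levels lowest-first with restore -r inside the target filesystem; -i is for cherry-picking individual files. A dry-run sketch (the dump file names follow the question; commands are echoed, not executed, since a real run needs root):

```shell
#!/bin/sh
RUN="${RUN:-echo}"   # dry run by default; set RUN= (empty) to really extract

# Apply dump levels lowest-first inside the target filesystem.
restore_layers() {
    for f in "$@"; do
        $RUN restore -rf "$f" || return 1
    done
    # restore -r leaves restoresymtable to link the passes together;
    # remove it only after the last level has been applied.
    $RUN rm restoresymtable
}

restore_layers dump-0 dump-1 dump-2
```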
On Apr 4, 2008, at 1:59 PM, Paul Schmehl wrote:
Has anyone done this?
I'm presently using rsync over ssh, but I think dump would be
better if it will work. I've been reading the man page, but I'm
wondering if anyone is doing this successfully and would like to
share their cmdline.
I
We use the following in a script to back up our servers.
/bin/ssh -q -o 'BatchMode yes' -l user host '/sbin/dump -h 0 -0uf - /home \
| /usr/bin/gzip --fast' 2> /path/to/logs/host/home_full.dump.log \
> /backups/host_home_full.dump.gz
--On April 4, 2008 12:59:27 PM -0500 Paul Schmehl [EMAIL
Hey,
I'm presently using rsync over ssh, but I think dump would be better if it
will work. I've been reading the man page, but I'm wondering if anyone is
doing this successfully and would like to share their cmdline.
Are you doing backups to disk? I find rsync combined with hard links
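The hard-link scheme hinted at here is usually built on rsync --link-dest: each snapshot directory looks complete, but files unchanged since the previous snapshot are hard links into it rather than new copies. A dry-run sketch with hypothetical paths:

```shell
#!/bin/sh
RUN="${RUN:-echo}"   # dry run; set RUN= (empty) to really sync

# Unchanged files become hard links into the previous snapshot,
# so each daily directory costs only the delta.
snapshot() {
    new="$1"; prev="$2"
    $RUN rsync -a --delete --link-dest="$prev" /home/ "$new"/
}

snapshot /backups/daily.0 /backups/daily.1
```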
Has anyone done this?
I'm presently using rsync over ssh, but I think dump would be better if it
will
work. I've been reading the man page, but I'm wondering if anyone is doing
this successfully and would like to share their cmdline.
Hi Paul,
We're not using dump over ssh but I
On Fri, 4 Apr 2008, Paul Schmehl wrote:
Has anyone done this?
I'm presently using rsync over ssh, but I think dump would be better if it
will work. I've been reading the man page, but I'm wondering if anyone is
doing this successfully and would like to share their cmdline.
There's
Has anyone done this?
I'm presently using rsync over ssh, but I think dump would be better if it
will
work. I've been reading the man page, but I'm wondering if anyone is doing
this successfully and would like to share their cmdline.
Hi,
[ from gopher://sdf-eu.org/00/users/mackie/Unix-Notes
On Fri, Apr 4, 2008 at 1:59 PM, Paul Schmehl [EMAIL PROTECTED] wrote:
Has anyone done this?
I'm presently using rsync over ssh, but I think dump would be better if
it will work. I've been reading the man page, but I'm wondering if anyone
is doing this successfully and would like to share
Paul Schmehl wrote:
Has anyone done this?
I'm presently using rsync over ssh, but I think dump would be better if
it will work. I've been reading the man page, but I'm wondering if
anyone is doing this successfully and would like to share their cmdline.
I did this once: http
--On Friday, April 04, 2008 22:21:52 +0200 Peter Boosten [EMAIL PROTECTED]
wrote:
Paul Schmehl wrote:
Has anyone done this?
Little did I know, when I posted this question, that I would receive such a
wealth of information. I'm deeply appreciative of the community's willingness to
there is a difference between what dump does and what tar/rsync do...
I like the idea of doing a bit-level backup, rather than a file-level
backup.
If you've never done a dump, try it locally, and then try restore,
particularly interactive restore (restore -i). It's pretty cool and I
don't
contributions.
Now I have some reading to do. :-)
I think there is a difference between what dump does and what tar/rsync
do... I like the idea of doing a bit-level backup, rather than a
file-level backup.
If you've never done a dump, try it locally, and then try restore
On Fri, 04 Apr 2008 12:59:27 -0500, in sentex.lists.freebsd.questions
you wrote:
Has anyone done this?
Hi,
Yes, we use something like the following
#!/bin/sh
if [ -z "$1" ] ; then
echo ""
echo "Usage: $0 backup level"
echo "See 'man dump' for more information"
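A complete minimal version of such a level-taking wrapper might look like the following sketch. The filesystem and dump path are hypothetical, and the dump command is echoed rather than executed, since a real run needs root:

```shell
#!/bin/sh
# Validate the level argument, then echo the dump command that
# would be run (drop the echo, as root, to run it for real).
do_backup() {
    level="$1"
    case "$level" in
        [0-9]) ;;
        *)  echo "Usage: do_backup backup-level (0-9)"
            echo "See 'man dump' for more information"
            return 1 ;;
    esac
    echo "dump -${level}uaL -f /backups/home.level${level}.dump /home"
}

do_backup 0
```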
Chris Maness wrote:
Chris Maness wrote:
I used to analyze core dumps with tnos to debug. It has been a
coon's age since I've done such. I am having apache crash and core
dump. There isn't any error info in the log files.
Where is the core dumped to (the main log tells me the core has been
I used to analyze core dumps with tnos to debug. It has been a coon's
age since I've done such. I am having apache crash and core dump.
There isn't any error info in the log files.
Where is the core dumped to (the main log tells me the core has been
dumped)?
How do I analyze this dump
Chris Maness wrote:
I used to analyze core dumps with tnos to debug. It has been a coon's
age since I've done such. I am having apache crash and core dump.
There isn't any error info in the log files.
Where is the core dumped to (the main log tells me the core has been
dumped)?
How do I
as /backup
I used 'dump' to back up everything from /store to /backup with the
following command:
dump -0aun -f /backup/fullbackup /store
As expected, the result is a dump file called 'fullbackup'
Then I tested a restore, by restoring the fullbackup file from
/backup to /store. I did
1) made /store pristine: newfs -U /dev/ad4s1e
2) mounted /dev/ad4s1e on /store
3) cd into /store
4) ran the command: restore -r -uv -f /backup/fullbackup
5) remove 'restoresymtable' from /store
Thanks in advance for your help
You did the restore as root? (I think so, but just to be sure)
it is
Wojciech Puchar wrote:
Hi,
I have a box with three hard drives:
/dev/da0 - dedicated to the OS
/dev/ad4s1e - data drive - mounted as /store
/dev/ad5s1e - holds a backup of /dev/ad4 - mounted as /backup
I used 'dump' to back up everything from /store to /backup with the
following command
Hi,
I have a box with three hard drives:
/dev/da0 - dedicated to the OS
/dev/ad4s1e - data drive - mounted as /store
/dev/ad5s1e - holds a backup of /dev/ad4 - mounted as /backup
I used 'dump' to back up everything from /store to /backup with the
following command:
dump -0aun -f /backup
Hi,
Maybe this is a dumb question, but I was wondering if I could use
dump (and restore) on Windows NTFS partitions.
Say I have a NTFS partition, ad0s1. Could I use:
# dump -b 4 -f /backups/winxp.dump /dev/ad0s1
And after a restore, would Windows be able to read the files? What about dd
Martin Boulianne wrote:
Maybe this is a dumb question, but I was wondering if I could use
dump (and restore) on Windows NTFS partitions.
Say I have a NTFS partition, ad0s1. Could I use:
# dump -b 4 -f /backups/winxp.dump /dev/ad0s1
No. Dump is specific to UFS/UFS2 filesystems.
On Wed, Jan 30, 2008 at 09:18:53AM -0500, Martin Boulianne wrote:
Hi,
Maybe this is a dumb question, but I was wondering if I could use
dump (and restore) on Windows NTFS partitions.
Say I have a NTFS partition, ad0s1. Could I use:
# dump -b 4 -f /backups/winxp.dump /dev/ad0s1
Well, I
On Wed, Jan 30, 2008 at 09:18:53AM -0500, Martin Boulianne wrote:
Hi,
Maybe this is a dumb question, but I was wondering if I could use
dump (and restore) on Windows NTFS partitions.
Say I have a NTFS partition, ad0s1. Could I use:
# dump -b 4 -f /backups/winxp.dump /dev/ad0s1
Dump
On Jan 30, 2008 2:08 PM, Roland Smith [EMAIL PROTECTED] wrote:
On Wed, Jan 30, 2008 at 09:18:53AM -0500, Martin Boulianne wrote:
Hi,
Maybe this is a dumb question, but I was wondering if I could use
dump (and restore) on Windows NTFS partitions.
Say I have a NTFS partition, ad0s1. Could
snip
I do not know the syntax either, but it does say that whatever it is
is being deprecated, so I don't imagine the documentation guys will
bother putting it in there...
SC
___
freebsd-questions@freebsd.org mailing list
I'm trying to get a crash dump of a ZFS-related kernel crash, but it
happens before dumpon has run, so I think I need to hardcode the device
in the kernel. However, I can't find the syntax for this. Anyone have
any ideas?
All I've found in the docs is this:
Alternatively, the dump device
Is it possible to dump a file system except for a specified directory?
Or does the dump command require that the WHOLE file system is dumped?
I remember gtar being able to negate archiving specific files.
Chris
On Sat, 19 Jan 2008, Chris Maness wrote:
Is it possible to dump a file system except for a specified directory? Or
does the dump command require that the WHOLE file system is dumped? I
remember gtar being able to negate archiving specific files.
See the dump man page for the nodump flag
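Concretely, the flag-based exclusion has two halves: mark the subtree with the nodump file flag, then tell dump to honor it. chflags and dump are FreeBSD-specific and need root, and the paths below are made up, so the commands are only echoed here:

```shell
#!/bin/sh
RUN="${RUN:-echo}"   # dry run; set RUN= (empty) on a real FreeBSD box

exclude_and_dump() {
    skip="$1"; fs="$2"; out="$3"
    $RUN chflags -R nodump "$skip"      # mark the subtree "do not dump"
    # -h 0 makes dump honor the nodump flag even on a level 0 dump
    # (the default honor level is 1, i.e. incrementals only)
    $RUN dump -0auL -h 0 -f "$out" "$fs"
}

exclude_and_dump /store/scratch /store /backup/store.dump
```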
A program that I use has started giving me this error message when I try
to load it:
Segmentation fault: 11 (core dumped)
Can someone give me a heads-up on what's going on here? I've done a
reinstall to no avail.
Rem
On 2007-10-31 15:32, Rem P Roberti [EMAIL PROTECTED] wrote:
A program that I use has started giving me this error message when I try
to load it:
Segmentation fault: 11 (core dumped)
Can someone give me a heads-up on what's going on here? I've done a
reinstall to no avail.
Is there any
On Wed, Oct 31, 2007 at 03:32:10PM -0700, Rem P Roberti wrote:
A program that I use has started giving me this error message when I try
to load it:
Segmentation fault: 11 (core dumped)
This means that the program has either tried to read from a part of the
memory that it isn't allowed to
On 2007.11.01 00:39:52 +, Roland Smith wrote:
On Wed, Oct 31, 2007 at 03:32:10PM -0700, Rem P Roberti wrote:
A program that I use has started giving me this error message when I try
to load it:
Segmentation fault: 11 (core dumped)
This means that the program has either tried to
On Thu, 1 Nov 2007 00:39:52 +0100
Roland Smith [EMAIL PROTECTED] wrote:
On Wed, Oct 31, 2007 at 03:32:10PM -0700, Rem P Roberti wrote:
A program that I use has started giving me this error message when
I try to load it:
Segmentation fault: 11 (core dumped)
Hey, it's Halloween! ;)
Hi Daemons,
I have been trying to learn this amazing OS in the last few months, it has a
lot of tools which can be useful within my toolbox. I am having some issues
using this tool called dump, currently located under /sbin/dump, you all know
that. ;-). Well, my goal is to do a dump of file
Hi Daemons,
I have been trying to learn this amazing OS in the last few months, it has a lot of tools
which can be useful within my toolbox. I am having some issues using this tool
called dump, currently located under /sbin/dump, you all know that. ;-). Well, my
goal is to do a dump of file
Hi,
On 29/08/2007, Wojciech Puchar [EMAIL PROTECTED] wrote:
dump doesn't copy files to files, but files to raw device (partition,
tape, DVD) or to one/few big files.
Dump is used to back up a file system and can write that data to a file. It
doesn't have to write to a raw device.
To dump
Subject: Re: How to use dump?
On 29/08/2007, Wojciech Puchar [EMAIL PROTECTED] wrote:
dump doesn't copy files to files, but files to raw device (partition,tape, DVD)
or to one/few big files.
Dump is used to back up
Colleagues,
Right now I am watching a dump:
[EMAIL PROTECTED] ~] dump -b64 -5Lau /home
DUMP: Connection to big.sibptus.tomsk.ru established.
DUMP: Date of this level 5 dump: Sat Aug 18 14:02:16 2007
DUMP: Date of last level 0 dump: Sun Aug 12 11:10:56 2007
DUMP: Dumping snapshot of /dev
Victor Sudakov wrote:
Colleagues,
Right now I am watching a dump:
[EMAIL PROTECTED] ~] dump -b64 -5Lau /home
DUMP: Connection to big.sibptus.tomsk.ru established.
DUMP: Date of this level 5 dump: Sat Aug 18 14:02:16 2007
DUMP: Date of last level 0 dump: Sun Aug 12 11:10:56 2007
DUMP
Vince wrote:
Right now I am watching a dump:
[EMAIL PROTECTED] ~] dump -b64 -5Lau /home
DUMP: Connection to big.sibptus.tomsk.ru established.
DUMP: Date of this level 5 dump: Sat Aug 18 14:02:16 2007
DUMP: Date of last level 0 dump: Sun Aug 12 11:10:56 2007
DUMP: Dumping snapshot
Hi all,
Can I safely pump a filesystem dump through gzip during the dumping process,
or do I need to create the dump first and then gzip it after?
Does zipping the dumps cause any headaches at restore time?
(I currently dump 5 servers worth of data to a raid 5 array, and am about 20%
away from
On Thursday 16 August 2007, Grant Peel wrote:
Can I safely pump a filesystem dump through gzip during the dumping
process, or do I need to create the dump first and then gzip it after?
I do it all the time: dump -f - ... | gzip > date_filesystem.dump.gz
or with bzip2: dump -f - ... | bzip2
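Spelled out end to end, both directions of the pipeline look like the following sketch (file names hypothetical; the commands are echoed only, since dump and restore need root):

```shell
#!/bin/sh
# Echo-only sketch; drop the echo (and run as root) to execute.
compress_dump() {
    # dump to stdout (-f -), compress in flight, land in one file
    echo 'dump -0auL -f - /home | gzip > /backups/home.dump.gz'
}
restore_compressed() {
    # restore reads the decompressed stream on stdin via -f -
    echo 'gzcat /backups/home.dump.gz | restore -rf -'
}

compress_dump
restore_compressed
```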
On 8/16/07, John Nielsen [EMAIL PROTECTED] wrote:
On Thursday 16 August 2007, Grant Peel wrote:
Can I safely pump a filesystem dump through gzip during the dumping
process, or do I need to create the dump first and then gzip it after?
I do it all the time: dump -f - ... | gzip
On Thu, Aug 16, 2007 at 09:10:12AM -0400, John Nielsen wrote:
On Thursday 16 August 2007, Grant Peel wrote:
Can I safely pump a filesystem dump through gzip during the dumping
process, or do I need to create the dump first and then gzip it after?
I do it all the time: dump -f - ... | gzip
throttling too. I have seen ADSL modem/routers dropping
high-traffic connections. You said you have a cable modem, which does a
much simpler job than an ADSL modem/router, but I wouldn't trust it
anyway... As you said, you did manage to get the dump to your computer at
home, so assuming
On Thursday 09 August 2007 17:39, Alex Zbyslaw wrote:
Nikos Vassiliadis wrote:
Keep in mind that dump(8) uses UFS2 snapshots. I don't know
the current status, but in the past, snapshots were not working
that well.
This statement is far too general and IMHO does a disservice to those
who
) dump parts.
Nikos
Thank you for those suggestions, it's appreciated. Although I set those values
both on the server and on the client, I get the same results. SCP
starts full speed, but at 20% of the 200 MB file it starts to stall. All ICMP
traffic was open on both firewalls
. The traceroute from the client
to office/my house is identical until the last but one hop. And I just
succeeded to dump it to my own computer (running Gentoo Linux, I think the
same modem, and a pretty default router in between).
So either the cable modem or the server (running IPF) at the office
On Thursday 09 August 2007 19:51, Bram Schoenmakers wrote:
Op donderdag 09 augustus 2007, schreef Alex Zbyslaw:
Hello,
Bram Schoenmakers wrote:
# /sbin/dump -0uan -L -h 0 -f - / | /usr/bin/bzip2 | /usr/bin/ssh
[EMAIL PROTECTED] \
dd of=/backup/webserver/root.0.bz2
bzip2
Bram Schoenmakers wrote:
If you can write (and compress if short of disk space) the dump
locally and
try an scp to your remote host as Nikos is suggesting, that will narrow
down the problem a bit. Any other large file will do: doesn't have to be a
dump.
As I wrote in my initial mail
Thank you for those suggestions, it's appreciated. Although I set those values
both on the server and on the client, I get the same results. SCP
starts full speed, but at 20% of the 200 MB file it starts to stall. All ICMP
traffic was open on both firewalls at that time.
I had
Dear list,
There is a problem with performing a dump from our webserver at the data
centre to a backup machine at the office. Every time we try to perform a dump,
the SSH tunnel dies:
# /sbin/dump -0uan -L -h 0 -f - / | /usr/bin/bzip2 | /usr/bin/ssh
[EMAIL PROTECTED] \
dd of=/backup
On Thursday 09 August 2007 11:25, Bram Schoenmakers wrote:
Dear list,
There is a problem with performing a dump from our webserver at the data
centre to a backup machine at the office. Every time we try to perform a
dump, the SSH tunnel dies:
# /sbin/dump -0uan -L -h 0 -f - / | /usr/bin
Bram Schoenmakers wrote:
Dear list,
There is a problem with performing a dump from our webserver at the data
centre to a backup machine at the office. Every time we try to perform a dump,
the SSH tunnel dies:
[snip]
* The client (where
On Thu, Aug 09, 2007 at 10:25:41AM +0200, Bram Schoenmakers wrote:
Dear list,
There is a problem with performing a dump from our webserver at the data
centre to a backup machine at the office. Every time we try to perform a dump,
the SSH tunnel dies:
# /sbin/dump -0uan -L -h 0 -f
out did already
exist). There was a noticeable improvement; the /usr dump came much further
than before. But at about 80% there was the timeout again.
I tried lowering the MTU value at the server side, but nearly all other
network traffic stopped working, so that is not the way to go.
Kind regards
for your answer.
I have added the 'pass in for icmp' rule to the firewall (pass out did
already exist). There was a noticeable improvement; the /usr dump came
much further than before. But at about 80% there was the timeout again.
Strange, is it possible that the filesystem is corrupted and
dump cannot
Nikos Vassiliadis wrote:
Keep in mind that dump(8) uses UFS2 snapshots. I don't know
the current status, but in the past, snapshots were not working
that well.
This statement is far too general and IMHO does a disservice to those
who worked on snapshots.
There were (and maybe even
Op donderdag 09 augustus 2007, schreef Alex Zbyslaw:
Hello,
Bram Schoenmakers wrote:
# /sbin/dump -0uan -L -h 0 -f - / | /usr/bin/bzip2 | /usr/bin/ssh
[EMAIL PROTECTED] \
dd of=/backup/webserver/root.0.bz2
bzip2 is darned slow and not always much better than gzip -9. It might
On 06/08/07, Victor Sudakov [EMAIL PROTECTED] wrote:
Does nobody know the answer, or am I the only one experiencing the
problem?
I don't know the answer, but I get essentially the
same behaviour. I have never seen any data loss,
I gave an example below. The file
[EMAIL PROTECTED] wrote:
Does nobody know the answer, or am I the only one experiencing the
problem?
I don't know the answer, but I get essentially the
same behaviour. I have never seen any data loss,
I gave an example below. The file wins.dat was not dumped.
in time*. To put it another way: by the time any part
of the system knows the state of any part of the system,
it is wrong.
I would like to point out that I am not saying that
dump does not have a bug, but that this is not
evidence in and of itself for it.
*And no matter what anyone tells you, time
I always use dump -L to dump a live filesystem.
However, when I restore the dump, I sometimes get messages like
foo.txt (inode 12345) not found on tape or
expected next file 12345, got 23456
I had it too; sometimes even restore is unable to restore level 1-9 dumps
:(
I thought this should
expected next file 12345, got 23456
I'm seeing this too. It's always exactly one inode per file system.
not one, sometimes even tens.
On 05/08/07, Victor Sudakov [EMAIL PROTECTED] wrote:
Victor Sudakov wrote:
I always use dump -L to dump a live filesystem.
However, when I restore the dump, I sometimes get messages like
foo.txt (inode 12345) not found on tape or
expected next file 12345, got 23456
I thought
[EMAIL PROTECTED] wrote:
I always use dump -L to dump a live filesystem.
However, when I restore the dump, I sometimes get messages like
foo.txt (inode 12345) not found on tape or
expected next file 12345, got 23456
I thought this should _never_ happen when dumping a snapshot
On Mon, Aug 06, 2007 at 02:18:57PM +0700, Victor Sudakov wrote:
I always use dump -L to dump a live filesystem.
However, when I restore the dump, I sometimes get messages like
foo.txt (inode 12345) not found on tape or
expected next file 12345, got 23456
I thought
On Mon, Aug 06, 2007 at 09:56:15AM +0700, Victor Sudakov wrote:
Victor Sudakov wrote:
I always use dump -L to dump a live filesystem.
However, when I restore the dump, I sometimes get messages like
foo.txt (inode 12345) not found on tape or
expected next file 12345, got 23456
I
In response to Jerry McAllister [EMAIL PROTECTED]:
On Mon, Aug 06, 2007 at 09:56:15AM +0700, Victor Sudakov wrote:
Victor Sudakov wrote:
I always use dump -L to dump a live filesystem.
However, when I restore the dump, I sometimes get messages like
foo.txt (inode 12345
the snapshot to fail, which
will cause dump to issue a warning and then continue without making a
snapshot.
It is not likely.
Can you provide the output of dump while doing the dump?
[EMAIL PROTECTED] ~] dump -b64 -0La /var
DUMP: Date of this level 0 dump: Mon Aug 6 22:43:07 2007
DUMP: Date
Jerry McAllister wrote:
Victor Sudakov wrote:
I always use dump -L to dump a live filesystem.
However, when I restore the dump, I sometimes get messages like
foo.txt (inode 12345) not found on tape or
expected next file 12345, got 23456
I thought this should _never_
cpghost wrote:
I always use dump -L to dump a live filesystem.
However, when I restore the dump, I sometimes get messages like
foo.txt (inode 12345) not found on tape or
expected next file 12345, got 23456
I thought this should _never_ happen when dumping a snapshot
Victor Sudakov wrote:
I always use dump -L to dump a live filesystem.
However, when I restore the dump, I sometimes get messages like
foo.txt (inode 12345) not found on tape or
expected next file 12345, got 23456
I thought this should _never_ happen when dumping a snapshot.
What
on using a USB flash drive to do the job and found a
message from [EMAIL PROTECTED]
(http://www.mail-archive.com/[EMAIL PROTECTED]/msg55434.html )
...
Hello,
The above mentioned web page and script shows a usage of cpio(1)
which I have never seen before:
cpio -dump ${tmpdir}/img
:
cpio -dump ${tmpdir}/img
I was curious, looked into the man page of cpio(1) and even in the
online manual at http://www.gnu.org/software/cpio/manual/cpio.html
but did not see anything about the option '-dump'; can someone
enlighten me? Thx
matthias
I think that should be read
Colleagues,
I always use dump -L to dump a live filesystem.
However, when I restore the dump, I sometimes get messages like
foo.txt (inode 12345) not found on tape or
expected next file 12345, got 23456
I thought this should _never_ happen when dumping a snapshot.
What is it?
Thanks
On Tue, Jul 24, 2007 at 06:54:01PM +0700, Victor Sudakov wrote:
Colleagues,
I always use dump -L to dump a live filesystem.
However, when I restore the dump, I sometimes get messages like
foo.txt (inode 12345) not found on tape or
expected next file 12345, got 23456
I'm seeing this too
cpghost wrote:
I always use dump -L to dump a live filesystem.
However, when I restore the dump, I sometimes get messages like
foo.txt (inode 12345) not found on tape or
expected next file 12345, got 23456
I'm seeing this too. It's always exactly one inode per file system.
You
Victor Sudakov [EMAIL PROTECTED] writes:
cpghost wrote:
I always use dump -L to dump a live filesystem.
However, when I restore the dump, I sometimes get messages like
foo.txt (inode 12345) not found on tape or
expected next file 12345, got 23456
I'm seeing this too. It's always
Lowell Gilbert wrote:
I always use dump -L to dump a live filesystem.
However, when I restore the dump, I sometimes get messages like
foo.txt (inode 12345) not found on tape or
expected next file 12345, got 23456
I'm seeing this too. It's always exactly one inode per file
/saved-entropy.1: (inode 1887715) not found on tape
expected next file 259101, got 11
$ restore -tvf test.dmp | grep 259101
Level 0 dump of / on test.sibptus.tomsk.ru:/dev/ad0s1a
Label: none
leaf 259101 ./usr/share/tmac/m.tmac
$ restore -tvf test.dmp | grep 11
Level 0 dump
offsite.
Here are the steps I have for the script.
Is this all I need to do? Do I need any error handling logic? THANKS!
# Mount the backup drive:
mount /dev/usb0
# Create the backup:
/sbin/dump -0u -f /dev/usb0 /sambavol
# Unmount the backup drive:
umount /dev/usb0
handling logic? THANKS!
# Mount the backup drive:
mount /dev/usb0
# Create the backup:
/sbin/dump -0u -f /dev/usb0 /sambavol
# Unmount the backup drive:
umount /dev/usb0
The following is a rough outline of what you should do;
-- shell-script --
#!/bin/sh
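Completed with the error handling the poster asked about, the outline might look like the following sketch. It writes the dump to a file on the mounted filesystem rather than over the raw device (an assumption on my part), and it unmounts even on failure; the device and paths are from the question, and everything is echoed rather than executed:

```shell
#!/bin/sh
RUN="${RUN:-echo}"   # dry run; set RUN= (empty) to really execute as root

usb_backup() {
    dev="$1"; mnt="$2"; fs="$3"
    $RUN mount "$dev" "$mnt" || { echo "mount $dev failed" >&2; return 1; }
    if ! $RUN /sbin/dump -0u -f "$mnt/sambavol.dump" "$fs"; then
        echo "dump of $fs failed" >&2
        $RUN umount "$mnt"    # clean up even when the dump fails
        return 1
    fi
    $RUN umount "$mnt"
}

usb_backup /dev/usb0 /mnt/usb /sambavol
```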
Hello guys,
quick question..
Is there a way to tell dump to do its work without
it asking Is the new volume mounted and ready to go?: (yes or no)
every time it changes mount points?
For example:
solara# dump -0L -f /dev/da1 /
DUMP: Date of this level 0 dump: Mon Jul 9 02:17:40 2007
DUMP
In the last episode (Jul 09), Dinesh Pandian said:
Hello guys,
quick question..
Is there a way to tell dump to do its work without
it asking Is the new volume mounted and ready to go?: (yes or no)
every time it changes mount points?
How else can it tell when you've swapped in new media
On 7/8/07, Dinesh Pandian [EMAIL PROTECTED] wrote:
Hello guys,
quick question..
Is there a way to tell dump to do its work without
it asking Is the new volume mounted and ready to go?: (yes or no)
every time it changes mount points?
For example:
solara# dump -0L -f /dev/da1 /
DUMP: Date
Is there a way to tell dump to do its work without it asking
Is the new volume mounted and ready to go?: (yes or no)
every time it changes mount points?
How else can it tell when you've swapped in new media? If it
automatically continued it would just overwrite the previous
segment
the backup:
/sbin/dump -0u -f /dev/usb0 /sambavol
# Unmount the backup drive:
umount /dev/usb0
On Tue, July 3, 2007 02:59, Yong Rao wrote:
Hello,
We have a problem with the SMP kernel: it fails to dump core when a
crash happens.
Which version of FreeBSD? -Current?
better ask in [EMAIL PROTECTED] or file a PR.
http://www.freebsd.org/send-pr.html
Rgds,
Patrick
I am able
To: Yong Rao
Cc: [EMAIL PROTECTED]
Subject: Re: SMP options and core dump failure
On Tue, July 3, 2007 02:59, Yong Rao wrote:
Hello,
We have a problem with the SMP kernel: it fails to dump core when a
crash happens.
Which version of FreeBSD? -Current?
better ask in [EMAIL PROTECTED] or file
Hello,
We have a problem with the SMP kernel: it fails to dump core when a
crash happens.
I am able to isolate the problem to kernel configurations which have SMP
enabled when used with 2 cpus.
With ONE cpu the core dump works ok.
I built the kernel with GENERIC, and deliberately
Hello.
Can anyone make anything out of this crash dump?
It's an SMP amd64 6.2 box with a RAID-5 SCSI controller and a couple GiB
of RAM. We are also using GELI.
bye Thanks
av.
--
# kgdb kernel.debug /var/crash
How do you force a memory dump from a specific PID?
In the last episode (Jun 04), Sean Murphy said:
How do you force a memory dump from a specific PID?
/usr/bin/gcore
--
Dan Nelson
[EMAIL PROTECTED]
:
In the last episode (Jun 04), Sean Murphy said:
How do you force a memory dump from a specific PID?
/usr/bin/gcore
Dan Nelson wrote:
In the last episode (Jun 04), Sean Murphy said:
How do you force a memory dump from a specific PID?
/usr/bin/gcore
gcore is one of the few programs left that still requires procfs.
You'll need to mount it: mount -t procfs /proc /proc
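Put together, the procfs-era sequence the replies describe is roughly the following (echoed only; both steps need root, and the core.<pid> output name via gcore's -c option is my assumption):

```shell
#!/bin/sh
# Echo-only sketch: mount procfs (which gcore then required),
# then write a core image for the given process.
gcore_pid() {
    pid="$1"
    echo "mount -t procfs proc /proc"
    echo "gcore -c core.$pid $pid"     # writes core.<pid> for that process
}

gcore_pid 581
```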
Don't top-post, please.
Sean Murphy [EMAIL PROTECTED] writes:
I get this error when trying gcore; what am I doing wrong?
# gcore 581
gcore: /proc/581/file: No such file or directory
# cd /proc
# ls -la
total 4
dr-xr-xr-x 2 root wheel 512 May 8 2005 .
Lowell Gilbert wrote:
Don't top-post, please.
Sean Murphy [1][EMAIL PROTECTED] writes:
I get this error when trying gcore; what am I doing wrong?
# gcore 581
gcore: /proc/581/file: No such file or directory
# cd /proc
# ls -la
total 4
dr-xr-xr-x 2 root wheel 512
On Mon, Jun 04, 2007 at 05:08:02PM -0700, Sean Murphy wrote:
Lowell Gilbert wrote:
Don't top-post, please.
Sean Murphy [1][EMAIL PROTECTED] writes:
I get this error when trying gcore; what am I doing wrong?
# gcore 581
gcore: /proc/581/file: No such file or directory