On 04.06.2012 04:59, Liu Bo wrote:
On 06/04/2012 10:18 AM, Maxim Mikheev wrote:
Hi Liu,
1) None of them are working (see dmesg at the end)
2)
max@s0:~$ sudo btrfs scrub start /dev/sdb
ERROR: getting dev info for scrub failed: Inappropriate ioctl for device
max@s0:~$ sudo btrfs scrub start
How can I mount it in the first place?
On 06/04/2012 04:18 AM, Arne Jansen wrote:
On 04.06.2012 13:30, Maxim Mikheev wrote:
How can I mount it in the first place?
Let me state it differently: If you can't mount it, you can't scrub it.
On 06/04/2012 04:18 AM, Arne Jansen wrote:
On 04.06.2012 04:59, Liu Bo wrote:
On 06/04/2012 10:18 AM, Maxim Mikheev wrote:
Hi Liu,
1) all
Hi Arne,
Can you advise how I can recover the data?
I tried almost everything I found on https://btrfs.wiki.kernel.org
btrfs-restore restored some files, but they are not what was stored.
I have seen this command
--
In case of a corrupted
On Mon, Jun 04, 2012 at 08:01:32AM -0400, Maxim Mikheev wrote:
Thank you for helping.
I'm not sure I can be of much help, but there were a few things
missing from the earlier conversation that I wanted to check the
details of.
~$ uname -a
Linux s0 3.4.0-030400-generic #201205210521 SMP Mon
What have you done? Why do you need to recover data? What happened? A
power failure? A kernel crash?
On Tue, 29 May 2012 18:14:53 -0400, Maxim Mikheev wrote:
I recently decided to use btrfs. It works perfectly for a week even
under heavy load. Yesterday I destroyed the backups, as I cannot afford to
It was a kernel panic from btrfs.
I had around 40 parallel processes of reading/writing.
On 06/04/2012 08:24 AM, Stefan Behrens wrote:
adding -v, as an example:
sudo btrfs-find-root -v -v -v -v -v /dev/sdb
didn't change output at all.
On 06/04/2012 08:11 AM, Hugo Mills wrote:
By the way, if the data is recovered I can easily reproduce the crash
situation, so it could serve as a real-life heavy-load test.
On 06/04/2012 08:24 AM, Stefan Behrens wrote:
[trimmed Arne Jan from cc by request]
On Mon, Jun 04, 2012 at 08:28:22AM -0400, Maxim Mikheev wrote:
adding -v, as an example:
sudo btrfs-find-root -v -v -v -v -v /dev/sdb
didn't change output at all.
OK, then all I can suggest is what I said below -- work through the
potential tree
I used only one volume.
I will work through your suggestion.
Are there any other options here?
On 06/04/2012 08:34 AM, Hugo Mills wrote:
Hi Jan, Alex,
I have seen some discussions about btrfs send/receive functionality
being developed by you. I have also been interested in this. I spent
some time coding a prototype doing something like Alex described in
http://www.spinics.net/lists/linux-btrfs/msg16175.html, i.e., walking
over FS
On 04.06.2012 14:39, Alex Lyakas wrote:
# How does one track changes in generic INODE_ITEM properties, like
mode or uid/gid? Whenever such property gets changed, INODE_ITEM
gets stamped with a new transid, but do we need to compare it with the
previous version on the receive side to realize
On Mon, 04 Jun 2012 08:26:43 -0400, Maxim Mikheev wrote:
It was a kernel panic from btrfs.
I had around 40 parallel processes of reading/writing.
Do you have a stack trace for this kernel panic, something with the term
BUG, WARNING and/or Call Trace in /var/log/kern.log or
/var/log/syslog (or
On Mon, Jun 4, 2012 at 2:39 PM, Alex Lyakas
alex.bolshoy.bt...@gmail.com wrote:
On Mon, Jun 04, 2012 at 07:43:40AM -0400, Maxim Mikheev wrote:
alternate copy you wish to use. In the following example we ask for
using the superblock copy #2 of /dev/sda7:
# ./btrfsck -s 2 /dev/sda7
-
but it gave me:
$ sudo btrfsck -s 2 /dev/sdb
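For reference, btrfs keeps its superblock mirrors at fixed byte offsets on each device, which is what the -s option selects. A minimal sketch computing those offsets; the dd line is commented out, and /dev/sdb is only the example device from this thread:

```shell
# btrfs superblock mirror offsets, fixed by the on-disk format:
#   mirror 0: 64 KiB, mirror 1: 64 MiB, mirror 2: 256 GiB
sb0=$((64 * 1024))                    # 65536
sb1=$((64 * 1024 * 1024))             # 67108864
sb2=$((256 * 1024 * 1024 * 1024))     # 274877906944
printf 'mirror 0 at byte %s\n' "$sb0"
printf 'mirror 1 at byte %s\n' "$sb1"
printf 'mirror 2 at byte %s\n' "$sb2"
# Read-only peek at mirror 1 for the btrfs magic string "_BHRfS_M"
# (do not run blindly against a disk you care about):
# dd if=/dev/sdb bs=64K skip=$((sb1 / 65536)) count=1 2>/dev/null | strings | grep _BHRfS_M
```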
After looking at kern.log, it looks like I had a RAID card failure and
data was not stored properly on one of the disks (/dev/sde).
Btrfs didn't recognize the disk failure and kept trying to write data
until reboot.
Some other tests after reboot show that /dev/sde has generation 9095
and the other 4 disks
On Mon, 04 Jun 2012 10:08:54 -0400, Maxim Mikheev wrote:
Disks were connected to RocketRaid 2760 directly as JBOD.
There is no LVM, MD or encryption. I used plain disks directly.
The file system was 55% full (1.7TB from 3TB for each disk).
Logs are attached.
The error happened on May 29,
Can I roll back to 9095, as all disks have 9095?
How can I send this file to the mailing list?
On 06/04/2012 11:02 AM, Stefan Behrens wrote:
Hi Alex, Jan,
I was also interested in send/receive semantics and was thinking that
if we adhere to the semantics as in
http://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg07482.html
then it is impossible to track deleted items (files, dirs, extended
attributes). I can develop a command
On Mon, 04 Jun 2012 11:08:36 -0400, Maxim Mikheev wrote:
How can I send this file to the mailing list?
Using web space, e.g. http://pastebin.com/
pastebin.com has a 500K limit.
I put the file here: http://www.4shared.com/archive/I8cU3K43/kernlog1.html?
On 06/04/2012 11:11 AM, Stefan Behrens wrote:
On Mon, Jun 4, 2012 at 5:10 PM, shyam btrfs shyam.bt...@gmail.com wrote:
Hi Arne,
On Mon, Jun 4, 2012 at 4:01 PM, Arne Jansen sensi...@gmx.net wrote:
On 04.06.2012 14:39, Alex Lyakas wrote:
I ran through all the potential tree roots. Every time it gave me
messages like these:
parent transid verify failed on 3405159735296 wanted 9096 found 5263
parent transid verify failed on 3405159735296 wanted 9096 found 5263
parent transid verify failed on 3405159735296 wanted 9096 found 5263
--super works but my root tree 2 has many errors too.
What can I do next?
Thanks
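A quick way to see whether those errors all point at the same older generation is to tally the wanted/found pairs from the log. A small sketch using the sample lines above (the awk field positions assume exactly this message format):

```shell
# Tally (wanted, found) generation pairs from btrfs transid errors.
cat > /tmp/transid.log <<'EOF'
parent transid verify failed on 3405159735296 wanted 9096 found 5263
parent transid verify failed on 3405159735296 wanted 9096 found 5263
parent transid verify failed on 3405159735296 wanted 9096 found 5263
EOF
# $8 is the 'wanted' generation, $10 the 'found' one:
awk '/parent transid verify failed/ { n[$8 " -> " $10]++ }
     END { for (p in n) print n[p], "x wanted", p }' /tmp/transid.log
# prints: 3 x wanted 9096 -> 5263
```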
On 06/04/2012 10:54 AM, Ryan C. Underwood wrote:
On Mon, Jun 04, 2012 at 06:04:22PM +0100, Hugo Mills wrote:
I'm out of ideas.
... but that's not to say that someone else won't have some ideas. I
wouldn't get your hopes up too much, though.
At this point, though, you're probably looking at somebody writing
custom code to scan the FS
On Mon, Jun 4, 2012 at 6:33 PM, Alexander Block abloc...@googlemail.com wrote:
Is there any chance to fix it and recover the data after such a failure?
On 06/04/2012 11:02 AM, Stefan Behrens wrote:
If he has it in a RAID 1, could he manually fail the bad disk and try
it from there? Obviously this could be harmful, so a dd copy would be
a VERY good idea (truthfully, that should have been the first thing
that was done).
Michael
On Mon, Jun 4, 2012 at 12:09 PM, Hugo Mills h...@carfax.org.uk
It was RAID0, unfortunately.
On 06/04/2012 02:02 PM, Michael wrote:
On Fri, Jun 01, 2012 at 09:55:51AM -0400, Josef Bacik wrote:
In doing my enospc work I would sometimes error out in btrfs_save_ino_cache
which would abort the transaction but we'd still end up with a corrupted
file system. This is because we don't actually check the return value and
so if
On Mon, Jun 04, 2012 at 05:02:26PM +0200, Stefan Behrens wrote:
According to the kern.1.log file that you have sent (which is not
visible on the mailing list because it exceeded the 100,000 chars limit
of vger.kernel.org), a rebalance operation was active when the disks or
the RAID
Below is what you used? So you have RAID 0 for data and RAID 1 for
metadata. This doesn't help any, but it's a point of info.
# Create a filesystem across four drives (metadata mirrored, data striped)
mkfs.btrfs /dev/sdb /dev/sdc /dev/sdd /dev/sde
Just to make sure I understand correctly: This FS with
I hit a problem on my laptop, I had about 40GB free, and I screwed up a
36GB virtualbox image.
No biggie, I have netapp style snapshots, so I deleted my messed up VM
image, and figured I only had to copy the last image from my hourly
snapshot.
First I thought, it sure would be nice if I could get
On 05/06/12 13:01, Marc MERLIN wrote:
First I thought, it sure would be nice if I could get btrfs to reference
the same blocks from the snapshot for my current image.
But --reflink failed across device nodes, so I was forced to
copy/duplicate the blocks (36GB).
Patches for this were posted
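For anyone hitting the same wall: cp can only share extents within a single btrfs filesystem. With GNU coreutils, --reflink=always refuses to copy when extents cannot be shared, while --reflink=auto silently falls back to a plain byte copy, which is effectively what the 36GB duplication above was. A minimal sketch (works on any filesystem; the paths are made up):

```shell
# cp --reflink=auto: clone extents when the filesystem supports it
# (same-btrfs only), otherwise silently fall back to a normal copy.
dir=$(mktemp -d)
printf 'pretend this is a 36GB VM image' > "$dir/hourly_snap_image"
cp --reflink=auto "$dir/hourly_snap_image" "$dir/current_image"
cmp -s "$dir/hourly_snap_image" "$dir/current_image" && echo "copies match"
rm -rf "$dir"
```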