Hi Numan,
The security groups were the issue: I deleted them in OpenStack, resynced, and
recreated them.
Thanks for your help.
best regards,
martin
> On 21.01.2017 at 18:17, Numan Siddique wrote:
>
> Looks like the Northbound db is not in sync with neutron db.
> Can you run the command "ov
Hi,
I tried to use OVN with OpenStack, but I ran into an issue.
I use the OVN packages from the Canonical Cloud archive:
On the controller node:
ii  ovn-central  2.6.0-0ubuntu2~cloud0  amd64  OVN central components
ii  ovn-common
Hi,
I am writing custom resources for Arista switches and I have an issue with
Hash#merge.
I want to implement logging in a VRF (which is a kind of change root).
The resource is ensurable; if I want to delete a logging entry, I get
:vrf => nil in the property_hash.
Prior to the hash merge
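The clobbering described above can be sketched in Python (an analogy only: the real code is Ruby's Hash#merge inside a Puppet provider, and the key names below are invented):

```python
# Analogy for the Hash#merge symptom (invented keys; the real provider
# is Ruby). A plain merge lets an explicit nil/None in the incoming
# hash clobber the value already held in the property hash.
current = {'ensure': 'present', 'vrf': 'mgmt'}
incoming = {'ensure': 'absent', 'vrf': None}

clobbered = {**current, **incoming}   # like current.merge(incoming)
filtered = {**current,
            **{k: v for k, v in incoming.items() if v is not None}}

print(clobbered['vrf'])  # → None  (the :vrf => nil symptom)
print(filtered['vrf'])   # → mgmt  (nil values dropped before merging)
```

In Ruby the equivalent of the filtered merge would be rejecting nil values from the incoming hash before calling merge.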
The host system is VirtualBox, and the guest system is qemu without KVM, because
VirtualBox doesn't support hardware acceleration.
The problem is that nova-compute generates invalid XML for this combination.
The offending part is "4".
This part is not accepted from libvirt and an error is logg
@Matt:
in line 359 of driver.py the minimum libvirt version is defined for which
the NUMA code is activated.
MIN_LIBVIRT_NUMA_VERSION = (1, 2, 7)
Therefore you did not trigger the behavior with the source code
installation.
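As an illustration of how such a version gate behaves (a sketch, not the actual nova code; the helper name is made up): Python tuples compare lexicographically, so the check is a single comparison.

```python
# Sketch of a tuple-based minimum-version gate like the one above
# (the function name is invented; nova's real check lives in driver.py).
MIN_LIBVIRT_NUMA_VERSION = (1, 2, 7)

def numa_code_enabled(libvirt_version):
    """True when the detected libvirt version meets the minimum."""
    return libvirt_version >= MIN_LIBVIRT_NUMA_VERSION

# An older libvirt never reaches the NUMA code path, which is why a
# source installation against an old libvirt does not show the bug.
print(numa_code_enabled((1, 2, 2)))   # → False
print(numa_code_enabled((1, 2, 12)))  # → True
```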
--
You received this bug notification because you are a member of
@Matt V: I hacked an easy place in
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py line 4720.
change
if CONF.libvirt.virt_type not in ['qemu', 'kvm']:
to
if CONF.libvirt.virt_type not in ['kvm']:
The commit that changed the NUMA behavior is 945ab28.
I am not sure, does qemu without k
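The effect of the hack can be sketched like this (a simplified stand-in, not a verbatim copy of nova's driver.py; the function name is invented):

```python
# The stock guard treats both qemu and kvm as NUMA-capable; narrowing
# the list to kvm makes plain qemu guests skip the NUMA XML entirely.
def builds_numa_xml(virt_type, patched=False):
    supported = ['kvm'] if patched else ['qemu', 'kvm']
    return virt_type in supported

print(builds_numa_xml('qemu'))                # stock code: True
print(builds_numa_xml('qemu', patched=True))  # hacked code: False
```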
Hi Greg,
I took down the interface with "ifconfig p7p1 down".
I attached the config of the first monitor and the first osd.
I created the cluster with ceph-deploy.
The version is ceph version 0.86 (97dcc0539dfa7dac3de74852305d51580b7b1f82).
On 13.10.2014 21:45, Gregory Farnum wrote:
> How did you
Hi List,
I have a ceph cluster setup with two networks, one for public traffic
and one for cluster traffic.
Network failures in the public network are handled quite well, but
network failures in the cluster network are handled very badly.
I found several discussions on the ml about this topic and
Hi,
I tried 3.10.0-rc7 with the btrfs option skinny extents (btrfstune -x),
I get this warning after a few seconds of ceph workload.
-martin
[ 1153.897960] [ cut here ]
[ 1153.897977] WARNING: at fs/btrfs/backref.c:903
find_parent_nodes+0x107f/0x1090 [btrfs]()
[ 1153.897
Hi List,
if I add my router's gateway to an external network, I get an error in
the l3-agent.log about a failure in iptables-restore.
As far as I know, iptables-restore gets its input on stdin; how
can I see the iptables rules that fail to apply?
How could I debug this further?
Full log is a
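One way to capture those rules (my own suggestion, not an l3-agent feature): since iptables-restore reads everything on stdin, a wrapper placed in front of the real binary can log the input before forwarding it. A minimal Python sketch with invented names and paths:

```python
# Log the ruleset that would reach iptables-restore, then forward it
# unchanged to the real binary (paths/names here are illustrative).
import subprocess

def logging_restore(rules, real_binary='/sbin/iptables-restore',
                    logfile='/tmp/iptables-restore-input.txt'):
    """Append the incoming ruleset to a log, then feed it on stdin."""
    with open(logfile, 'a') as f:
        f.write(rules)
    proc = subprocess.run([real_binary], input=rules, text=True)
    return proc.returncode
```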
Hi Josh,
now everything is working, many thanks for your help, great work.
-martin
On 30.05.2013 23:24, Josh Durgin wrote:
>> I have two more things.
>> 1. The volume_driver=cinder.volume.driver.RBDDriver is deprecated,
>> update your configuration to the new path. What is the new path?
>
> cind
root@controller:~/vm_images# rbd -p volumes -l ls
NAME                                         SIZE PARENT FMT PROT LOCK
volume-34838911-6613-4140-93e0-e1565054a2d3 10240M          2
root@controller:~/vm_images#
-martin
On 30.05.2013 22:56, Josh Durgin wrote:
> On 05/30/2013 01:5
publicurl is the ip
for "customer" of the cluster?
Am I wrong here?
-martin
On 30.05.2013 22:22, Martin Mailand wrote:
> Hi Josh,
>
> On 30.05.2013 21:17, Josh Durgin wrote:
>> It's trying to talk to the cinder api, and failing to connect at all.
>> Perhaps there
Hi Josh,
On 30.05.2013 21:17, Josh Durgin wrote:
> It's trying to talk to the cinder api, and failing to connect at all.
> Perhaps there's a firewall preventing that on the compute host, or
> it's trying to use the wrong endpoint for cinder (check the keystone
> service and endpoint tables for the
Hi,
telnet is working. But how does nova know where to find the cinder-api?
I have no cinder conf on the compute node, just nova.
telnet 192.168.192.2 8776
Trying 192.168.192.2...
Connected to 192.168.192.2.
Escape character is '^]'.
get
Error response
Error code 400.
Message: Ba
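As the earlier reply hinted, nova finds cinder through the keystone service catalog (the service and endpoint tables), not through a local cinder config on the compute node. The manual telnet probe can also be scripted; a small sketch (the helper name is mine; pass in the host and port from the session above):

```python
# TCP reachability probe, the scripted version of the telnet check.
import socket

def port_open(host, port, timeout=2.0):
    """True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

port_open('192.168.192.2', 8776) reproduces the telnet test above.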
Hi Weiguo,
my answers are inline.
-martin
On 30.05.2013 21:20, w sun wrote:
> I would suggest on nova compute host (particularly if you have
> separate compute nodes),
>
> (1) make sure "rbd ls -l -p " works and /etc/ceph/ceph.conf is
> readable by user nova!!
yes to both
> (2) make sure you can
Hi Josh,
I am trying to use ceph with OpenStack (Grizzly); I have a multi-host setup.
I followed the instructions at http://ceph.com/docs/master/rbd/rbd-openstack/.
Glance is working without a problem.
With cinder I can create and delete volumes without a problem.
But I cannot boot from volumes.
I do
en your osd tree has a single rack it should always mark OSDs down after 5
> minutes by default.
>
> David Zafman
> Senior Developer
> http://www.inktank.com
>
>
>
>
> On Apr 25, 2013, at 9:38 AM, Martin Mailand wrote:
>
>> Hi Sage,
>>
>>
Hi Sage,
On 25.04.2013 18:17, Sage Weil wrote:
> What is the output from 'ceph osd tree' and the contents of your
> [mon*] sections of ceph.conf?
>
> Thanks!
> sage
root@store1:~# ceph osd tree
# id  weight  type name  up/down  reweight
-1 24 root default
-3 24
Hi Wido,
I did not set the noosdout flag.
-martin
On 25.04.2013 14:56, Wido den Hollander wrote:
> Could you dump your osdmap? The first 10 lines would be interesting.
> There is a flag where you say "noosdout", could it be that the flag is set?
>
> Wido
epoch 206
fsid 13538f8a-a9b5-4f57-ad72
Hi,
if I shut down an OSD, the OSD gets marked down after 20 seconds; after
300 seconds the OSD should get marked out, and the cluster should resync.
But that doesn't happen: the OSD stays in the status down/in forever, and
therefore the cluster stays degraded forever.
I can reproduce it with a new in
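The 300-second figure is the mon osd down out interval; for reference, a ceph.conf sketch of the knob being discussed (the value shown is the default, not a recommendation):

```ini
[mon]
    ; after an OSD has been down this many seconds, mark it out
    ; so recovery can begin (default: 300)
    mon osd down out interval = 300
```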
Hi Karthik,
are you sure the default gw in the vm is right?
route add default gw 10.0.2.15 br1 is quite unusual.
More likely it will be 10.0.2.1.
Just do a ip route show, before you change to ovs.
-martin
On 06.04.2013 at 01:50, Karthik Sharma wrote:
I have a Windows 7 PC as host. The network
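The suggested ip route show check can be scripted; a small sketch (helper name invented, and the sample text is illustrative rather than taken from Karthik's machine):

```python
# Pull the default gateway out of `ip route show` output.
def default_gateway(route_output):
    """Return the gateway of the 'default via <gw> ...' line, if any."""
    for line in route_output.splitlines():
        parts = line.split()
        if parts[:1] == ['default'] and 'via' in parts:
            return parts[parts.index('via') + 1]
    return None

sample = "default via 10.0.2.1 dev br1\n10.0.2.0/24 dev br1 proto kernel"
print(default_gateway(sample))  # → 10.0.2.1
```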
for monitors. I'm assuming that's not the case, but I want to
> make sure my docs are right on this point.
>
>
> On Thu, Mar 28, 2013 at 3:24 PM, Martin Mailand wrote:
>> Hi John,
>>
>> my ceph.conf is a bit further down in this email.
>>
>> -
Hi John,
my ceph.conf is a bit further down in this email.
-martin
On 28.03.2013 at 23:21, John Wilkins wrote:
Martin,
Would you mind posting your Ceph configuration file too? I don't see
any value set for "mon_host": ""
On Thu, Mar 28, 2013 at 1:04 PM, Martin Maila
highlight=admin%20socket#viewing-a-configuration-at-runtime)
> -Greg
>
> On Thu, Mar 28, 2013 at 12:33 PM, Martin Mailand wrote:
>> Hi,
>>
>> I get the same behavior on a newly created cluster as well, no changes to
>> the cluster config at all.
>> I stop the os
ing-w-out-rebalancing
>
> On Thu, Mar 28, 2013 at 11:12 AM, Martin Mailand wrote:
>> Hi Greg,
>>
>> setting the osd manually out triggered the recovery.
>> But now it is the question, why is the osd not marked out after 300
>> seconds? That's a default
Hi Joao,
thanks for catching that up.
-martin
On 28.03.2013 20:03, Joao Eduardo Luis wrote:
>
> Hi Martin,
>
> As John said in his reply, these should be reported to ceph-devel (CC'ing).
>
> Anyway, this is bug #4519 [1]. It was introduced after 0.58, released
> under 0.59 and is already fix
Hi Greg,
setting the osd manually out triggered the recovery.
But now the question is: why is the OSD not marked out after 300
seconds? That's a default cluster; I use the 0.59 build from your site,
and I didn't change any value except for the crushmap.
That's my ceph.conf.
-martin
[global]
houldn't be marked out. (ie, setting the 'noout'
> flag). There can also be a bit of flux if your OSDs are reporting an
> unusual number of failures, but you'd have seen failure reports if
> that were going on.
> -Greg
>
> On Thu, Mar 28, 2013 at 10:35 AM, Mart
Hi Greg,
/etc/init.d/ceph stop osd.1
=== osd.1 ===
Stopping Ceph osd.1 on store1...kill 13413...done
root@store1:~# date -R
Thu, 28 Mar 2013 18:22:05 +0100
root@store1:~# ceph -s
health HEALTH_WARN 378 pgs degraded; 378 pgs stuck unclean; recovery
39/904 degraded (4.314%); recovering 15E o/s,
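A quick arithmetic check of the "recovery 39/904 degraded (4.314%)" figure in that ceph -s output:

```python
# Verify the reported degraded percentage: 39 of 904 objects.
degraded, total = 39, 904
print(round(100 * degraded / total, 3))  # → 4.314
```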
e",
"enter_time": "2013-03-28 12:16:11.049925",
"might_have_unfound": [],
"recovery_progress": { "backfill_target": -1,
"waiting_on_backfill": 0,
"backfill_pos": "
"ceph osd tree"
> command, all your OSDs are running. You can change the time it takes
> to mark a down OSD out. That's " mon osd down out interval", discussed
> in this section:
> http://ceph.com/docs/master/rados/operations/monitoring-osd-pg/#degraded
>
Hi,
today one of my mons crashed, the log is here.
http://pastebin.com/ugr1fMJR
I think the most important part is:
2013-03-28 01:57:48.564647 7fac6c0ea700 -1
auth/none/AuthNoneServiceHandler.h: In function 'virtual int
AuthNoneServiceHandler::handle_request(ceph::buffer::list::iterator&,
ceph::b
size from this
> though.
>
> Have you followed this procedure to see if your object is getting
> remapped?
> http://ceph.com/docs/master/rados/operations/monitoring-osd-pg/#finding-an-object-location
>
> On Thu, Mar 21, 2013 at 12:02 PM, Martin Mailand wrote:
>> Hi,
>
Hi,
I want to change my crushmap to reflect my setup: I have two racks with
3 hosts each. I want to use a replication size of 2 for the rbd pool.
The failure domain should be the rack, so one replica should be in each
rack. That works so far.
But if I shut down a host the cluster stays degraded,
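A CRUSH rule matching that layout might look like the sketch below (illustrative names and old-style rule syntax; not the poster's actual crushmap). chooseleaf over the rack type picks one leaf per rack, giving one replica in each rack for a size-2 pool:

```
rule rbd_rack_replicated {
    ruleset 1
    type replicated
    min_size 1
    max_size 2
    step take default
    step chooseleaf firstn 0 type rack
    step emit
}
```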
,
therefore I cannot dump the flow.
-martin
On 05.03.2013 23:22, Aaron Rosen wrote:
> Hi
>
> Response inline:
>
> On Tue, Mar 5, 2013 at 2:00 PM, Martin Mailand wrote:
>> Hi List,
>>
>> I hope this is ok to ask here. I looked through the Code of Quantum and
>> I
Hi List,
I hope this is ok to ask here. I looked through the code of Quantum and
I don't understand how the GRE tunneling works.
I have two bridges, br-int and br-tun, both connected via a patch
port. But on br-tun I have several tunnels, one for each tenant?
How do I direct the traf
Hi Kyle,
it's working now; I'm not sure what the problem was in the first place.
Thanks, for your help.
-martin
On 03.03.2013 03:25, Kyle Mestery (kmestery) wrote:
> On Mar 2, 2013, at 2:29 PM, Martin Mailand wrote:
>> Hi Kyle,
>>
>> I found the patch and applied it.
Hi,
try qemu-img info -f raw rbd:libvirt-pool/gentoo-vm
-martin
On 02.03.2013 02:14, Mr. NPP wrote:
> updated to the latest 56.3, same problem with 56.1, it creates the image
> but i can't seem to view the info
>
> hyp03 ~ # qemu-img create -f rbd rbd:libvirt-pool/gentoo-vm 2G
> Formatting 'rbd
|00010|dpif|WARN|system@ovs-system: failed to add
vx1 as port: Invalid argument
What is wrong? Is it the command, or did I mess up the kernel module build?
-martin
On 01.03.2013 15:39, Kyle Mestery (kmestery) wrote:
> On Mar 1, 2013, at 8:26 AM, Martin Mailand wrote:
>> Hi Kyle,
>>
&g
Hi Kyle,
thanks for the quick response, where could I find this patch?
-martin
On 01.03.2013 15:18, Kyle Mestery (kmestery) wrote:
> On Mar 1, 2013, at 6:54 AM, Martin Mailand wrote:
>> Hi,
>>
>> are there any updates regarding the patch for VXLAN?
>>
> VXLAN s
Hi,
are there any updates regarding the patch for VXLAN?
-martin
On 22.02.2013 17:48, Jesse Gross wrote:
> On Fri, Feb 22, 2013 at 7:22 AM, Martin Mailand wrote:
>> Hi,
>>
>> after a bit of try and error everything is working now, except for vxlan.
>> I found in the
Hi,
after a bit of trial and error everything is working now, except for VXLAN.
I found in the devel mail archive that it is broken at the moment, but
that patches are underway. What is the current state?
-martin
On 20.02.2013 00:13, Jesse Gross wrote:
> On Tue, Feb 19, 2013 at 1:04 PM, Mar
Hi,
I compiled a kernel in it, and that was working. And configure checks
for the header files, doesn't it? configure reports no error.
What else should I do in the Linux tree?
-martin
On 19.02.2013 at 21:54, Jesse Gross wrote:
On Tue, Feb 19, 2013 at 12:50 PM, Martin Mailand wrote
rtin
On 19.02.2013 at 21:40, Jesse Gross wrote:
On Tue, Feb 19, 2013 at 11:41 AM, Martin Mailand wrote:
Hi List,
I am trying to build an openvswitch kernel module for 3.8.0, but I am
failing with this error.
What am I doing wrong?
Is the current git compatible with 3.8.0?
I just checked and it
Hi List,
I am trying to build an openvswitch kernel module for 3.8.0, but I am
failing with this error.
What am I doing wrong?
Is the current git compatible with 3.8.0?
log:
make -C /lib/modules/3.8.0/build
M=/home/martin/git/openvswitch/datapath/linux modules
make[4]: Entering directory '/home/marti
1.4.0-rc1-vdsp1.0
qemu utilities
-martin
On 14.02.2013 18:18, Sage Weil wrote:
> Hi Martin-
>
> On Thu, 14 Feb 2013, Martin Mailand wrote:
>> Hi List,
>>
>> I get this assertion reproducibly; how can I help to debug it?
>
> Can you describe the workload? Are the
Hi List,
I get this assertion reproducibly; how can I help to debug it?
-martin
(Reading database ... 52246 files and directories currently
installed.)
Preparing to replace linux-firmware 1.79 (using
.../linux-firmware_1.79.1_all.deb) ...
Unpacking replacement linux-firmware
Hi List,
I tried to use Open vSwitch to bond two interfaces together. The setup
should use the active/passive bonding mode (mode 1).
But if I pull eth0 I lose the connection to the machine; there is no
failover. If I pull eth1 the connection stays available.
I configured ovs like this. What am I doing wrong?
ovs-v
good question, probably we do not have enough experience with IPoIB.
But it looks good on paper, so it's definitely worth a try.
-martin
On 07.11.2012 at 23:28, Gandalf Corvotempesta wrote:
2012/11/7 Martin Mailand :
I tested a Arista 7150S-24, a HP5900 and in a few weeks I will get a
Mel
Hi,
I *think* the HP is Broadcom based, the Arista is Fulcrum based, and I
don't know which chips Mellanox is using.
Our NOC tested both of them, and the Arista was the clear winner, at
least in our workload.
-martin
On 07.11.2012 at 22:59, Stefan Priebe wrote:
HP told me they all use the s
On 07.11.2012 at 22:35, Martin Mailand wrote:
Hi,
I tested a Arista 7150S-24, a HP5900 and in a few weeks I will get a
Mellanox MSX1016. ATM the Arista is my favourite.
For the dual 10GbE NICs I tested the Intel X520-DA2 and the Mellanox
ConnectX-3. My favourite is the Intel X520-DA2.
That's p
Corvotempesta:
2012/11/7 Martin Mailand :
I have 16 SAS disks on an LSI 9266-8i and 4 Intel 520 SSDs on an HBA, the node
has dual 10G Ethernet. The clients are 4 nodes with dual 10GbE; as a test I
use rados bench on each client. The aggregated write speed is around 1.6 GB/s
with single replication.
Just for
Hi,
I have 16 SAS disks on an LSI 9266-8i and 4 Intel 520 SSDs on an HBA, the
node has dual 10G Ethernet. The clients are 4 nodes with dual 10GbE; as a
test I use rados bench on each client. The aggregated write speed is
around 1.6 GB/s with single replication.
In the first configuration, I had the
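A back-of-envelope check of how close that aggregate comes to the OSD node's wire speed (assumptions: 10GbE = 10e9 bit/s, GB = 1e9 bytes):

```python
# Dual 10GbE on the OSD node gives 2 * 10e9 / 8 = 2.5e9 bytes/s of
# raw line rate; 1.6 GB/s aggregate is about 64% of that.
line_rate = 2 * 10e9 / 8   # bytes per second
measured = 1.6e9
print(round(measured / line_rate, 2))  # → 0.64
```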
controller with lots of expanders, I've
noticed high IO wait times, especially when doing lots of small writes.
Mark
On 10/15/2012 11:12 AM, Martin Mailand wrote:
Hi,
inspired by the performance tests Mark did, I tried to compile my own
one.
I have four OSD processes on one Node, each proc
Hi,
inspired by the performance tests Mark did, I tried to compile my own one.
I have four OSD processes on one node; each process has an Intel 710 SSD
for its journal and 4 SAS disks via an LSI 9266-8i in RAID 0.
If I test the SSDs with fio they are quite fast and the w_wait time is
quite low.
B
Hi,
whilst testing the new rbd layering feature I found a problem with rbd
map. It seems rbd map doesn't support the new format.
-martin
ceph -v
ceph version 0.51-265-gc7d11cd
(commit:c7d11cd7b813a47167108c160358f70ec1aab7d6)
rbd create --size 10 --new-format new
rbd map new
add fail
Hi Sage,
is the rbd layering/cloning already testable in this release?
Do you have a link to the docs on how to use it?
Best Regards,
martin
On 26.08.2012 at 17:58, Sage Weil wrote:
The latest development release v0.51 is ready. Notable changes include:
* crush: tunables documented; feature b
Hi Wido,
until recently there were still a few bugs in btrfs which could be hit
quite easily with ceph. The last big one was fixed here
http://www.spinics.net/lists/ceph-devel/msg06270.html
I am running a ceph cluster with btrfs on a 3.5-rc2 without a problem,
even under heavy test load.
Ho
Hi,
what's up with locked, unlocked, unlocking?
-martin
On 16.06.2012 at 17:11, Sage Weil wrote:
On Fri, 15 Jun 2012, Yehuda Sadeh wrote:
On Fri, Jun 15, 2012 at 5:46 PM, Sage Weil wrote:
Looks good! Couple small things:
$ rbd unpreserve pool/image@snap
Is 'preserve' and 'unpreserve' the
Hi,
the ceph cluster has been running under heavy load for the last 13 hours
without a problem; dmesg is empty and the performance is good.
-martin
On 23.05.2012 at 21:12, Martin Mailand wrote:
this patch has been running for 3 hours without a BUG and without the warning.
I will let it run overnight and
Hi Josef,
this patch has been running for 3 hours without a BUG and without the warning.
I will let it run overnight and report tomorrow.
It looks very good ;-)
-martin
On 23.05.2012 at 17:02, Josef Bacik wrote:
Ok give this a shot, it should do it. Thanks,
--
To unsubscribe from this list: send th
Hi Josef,
now I get
[ 2081.142669] couldn't find orphan item for 2039, nlink 1, root 269,
root being deleted no
-martin
On 18.05.2012 at 21:01, Josef Bacik wrote:
*sigh* ok try this, hopefully it will point me in the right direction. Thanks,
[ 126.389847] Btrfs loaded
[ 126.390284] devi
Hi Josef,
there was one line before the bug.
[ 995.725105] couldn't find orphan item for 524
On 18.05.2012 at 16:48, Josef Bacik wrote:
Ok hopefully this will print something out that makes sense. Thanks,
-martin
[ 241.754693] Btrfs loaded
[ 241.755148] device fsid 43c4ebd9-3824-4b07-a71
Hi Josef,
I hit exactly the same bug as Christian with your last patch.
-martin
Hi Josef,
no, there was nothing above. Here is another dmesg output.
Was there anything above those messages? There should have been a WARN_ON() or
something. If not that's fine, I just need to know one way or the other so I can
figure out what to do next. Thanks,
Josef
-martin
[ 63.0
Hi Josef,
somehow I still get the kernel BUG messages; I used your patch from the
16th against rc7.
-martin
On 16.05.2012 at 21:20, Josef Bacik wrote:
Hrm ok so I finally got some time to try and debug it and let the test run a
good long while (5 hours almost) and I couldn't hit either the or
Hi,
I got the same warning but triggered it differently: I created a new
cephfs on top of btrfs via mkcephfs; the command then hangs.
[ 100.643838] Btrfs loaded
[ 100.644313] device fsid 49b89a47-76a0-45cf-9e4a-a7e1f4c64bb8 devid 1
transid 4 /dev/sdc
[ 100.645523] btrfs: setting nodatacow
Hi Josef,
On 11.05.2012 at 21:16, Josef Bacik wrote:
Heh duh, sorry, try this one instead. Thanks,
With this patch I got this Bug:
[ 8233.828722] [ cut here ]
[ 8233.828737] kernel BUG at fs/btrfs/inode.c:2217!
[ 8233.828746] invalid opcode: [#1] SMP
[ 8233.828761
Hi Josef,
On 11.05.2012 at 15:31, Josef Bacik wrote:
That previous patch was against btrfs-next, this patch is against 3.4-rc6 if you
are on mainline. Thanks,
I tried your patch against mainline, after a few minutes I hit this bug.
[ 1078.523655] [ cut here ]
[ 1078.52
Hi,
On 24.04.2012 at 18:31, João Eduardo Luís wrote:
What kernel and btrfs versions are you using?
Kernel: 3.4.0-rc3
btrfs-tools 0.19+20100601-3ubuntu3
That's how I created the fs.
mkfs.btrfs -n 32k -l 32k /dev/sd{c,d,e,f}
-martin
Hi,
On 24.04.2012 at 17:23, João Eduardo Luís wrote:
Any chance you could run iotop during the busy periods and tell us which
processes are issuing the io?
sure,
http://85.214.49.87/ceph/iotop.txt
-martin
Hi,
I see strange behavior on the OSD. The cluster is a two-node system:
on one machine 50 qemu/rbd VMs are running (idling); the other machine
is an OSD node with four OSD processes and one mon process.
The OSD disks are as follows:
sda is root
sdb is journal, four partitions
sd{c,d,e,f} each thr
Hi List,
is it possible to quiesce the disk before a snapshot? Or does it make no
sense with rbd?
How about the new rbd_cache, does it get flushed before the snapshot?
I would like to use it like this.
virsh snapshot-create --quiesce $DOMAIN
-martin