Hello Brian, my apologies for that ... we are installing using the
18.04.3 server media.
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1858161
Title:
Ubuntu 18.04 install not reading block storage properly
FYI - both Ubuntu 16.04 and the 20.04 daily build show the correct disk
size. See attached from Focal Fossa.
** Attachment added: "20200103-Focal.png"
https://bugs.launchpad.net/ubuntu/+source/hdparm/+bug/1858161/+attachment/5317451/+files/20200103-Focal.png
--
This screenshot shows what the Ubuntu 18.04.3 installer is showing when
allowing me to select the installation target. The correct value should
be ~1TB, not ~8TB.
** Attachment added: "20200102-Install_Block_Storage_Issue.png"
https://bugs.launchpad.net/ubuntu/+source/hdparm/+bug/1858161/+atta
I really don't know how to select the appropriate package, but hdparm
does seem to be confused as to the physical/logical sector size. I don't
know what the installer uses to get the wrong values though.
** Package changed: ubuntu => hdparm (Ubuntu)
--
I should also mention that this server is running on an AMD EPYC Rome
platform, and it seems that the kernel version used during install is a
bit older than it should be to support this.
Any thoughts on where the Ubuntu installation is getting the wrong
physical/logical size?
--
This screenshot shows output from hdparm, fdisk and the values in
/sys/block/sda/queue/physical_block_size and logical_block_size.
** Attachment added: "20200103-More_Info.png"
https://bugs.launchpad.net/ubuntu/+source/hdparm/+bug/1858161/+attachment/5317437/+files/20200103-More_Info.png
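For anyone wanting to reproduce this check without the screenshot, the same sysfs values can be read for every block device from a shell; this is just a convenience loop, not something from the report itself:

```shell
# Print the logical vs. physical sector size the kernel reports for each
# block device; these should agree with what hdparm and fdisk print.
for q in /sys/block/*/queue; do
    dev=$(basename "$(dirname "$q")")
    printf '%s: logical=%s physical=%s\n' "$dev" \
        "$(cat "$q/logical_block_size" 2>/dev/null)" \
        "$(cat "$q/physical_block_size" 2>/dev/null)"
done
```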
--
Public bug reported:
We recently received Supermicro servers with Avago 3108 chipset, and 2x
Seagate 4K SAS drives in a hardware RAID1 configuration.
All other operating systems (including Ubuntu 16.04) report this virtual
drive properly.
Ubuntu 18.04 shows it as if it were using 512-byte sectors.
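One plausible arithmetic behind the exact 8x inflation (a guess at the mechanism, not a confirmed diagnosis): the kernel always exports /sys/block/<dev>/size as a count of 512-byte units regardless of the drive's real sector size, so multiplying that count by the 4096-byte physical sector size instead of 512 overstates capacity by exactly a factor of 8. The sector count below is hypothetical, chosen to give a ~1 TB drive:

```shell
# /sys/block/<dev>/size is always a count of 512-byte units,
# regardless of the drive's real sector size.
sectors=1953525168                            # hypothetical count, ~1 TB drive
echo "correct:  $(( sectors * 512 )) bytes"   # ~1 TB
echo "inflated: $(( sectors * 4096 )) bytes"  # ~8 TB, matching the bogus value
```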
Christian, I couldn't hold back from giving this a try. FYI, it's
working like a dream.
Thanks again!
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1738864
Title:
libvirt updates all iSCSI targets
Ha ha -- yes, good point about January!
You are wonderful, thanks so much for your help Christian. We're going
to plan for one VM host to test, with only VMs that are part of a HA
pair (and probably a nice big OpenStack test cluster), and leave the
primary VM on the stock 16.04.3 packages. That wa
Hello Christian, thanks for your quick response.
True, it is quite unfortunate that this problem is just a waiting game,
and not a game-ending issue. And I definitely agree that some end game
here might not be good for a [5] type of result.
Regarding [3], yes I saw that yesterday while searching
Here is the debug log from libvirt. Starting on line 933, you will see
libvirt discovering the targets from the host, and then the next 10,000+
lines are libvirt going through and updating all of the target
information for each of these available targets. Finally, it does what
I would expect it t
Public bug reported:
Hello everyone, I'm a little confused about the behavior of libvirt in
Ubuntu 16.04.3.
We have up to 140 iSCSI targets on a single storage host, and all of
these are made available to our VM hosts. If I stop one of the iSCSI
pools through virsh ("virsh pool-destroy iscsipool1
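For context, an iSCSI storage pool of the kind being stopped here is defined with libvirt XML along these lines (the host name and IQN below are made-up placeholders, not values from this report):

```xml
<pool type='iscsi'>
  <name>iscsipool1</name>
  <source>
    <host name='storage.example.com'/>
    <device path='iqn.2015-01.com.example:target1'/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>
```

"virsh pool-destroy iscsipool1" should only need to tear down this one pool; the surprising behavior being reported is that it also triggers an update of every other discovered target on the host.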
Hello Stefan,
Yes, now that you mention it, it seems that "Generate from host NUMA
configuration" in 15.04 simply puts everything in node 0. At least,
that's what I'm seeing. While that's a little better than just spanning
the entire guest across both nodes (as a "default"), leaving an entire
se
Thank you for the update Stefan. Yes, I tried to compile and run numad
on Ubuntu but I had no luck there.
Also correct, I am explicitly pinning CPUs at this time. As far as I
can tell, it is the only option.
As far as the big question mark in my head goes ... This function works
as expected on
Err, most importantly, my (selfish) opinion is that something of this
magnitude should not be fixed upstream, but in a current Ubuntu LTS
release. :-)
--
Yes, so I guess that is the big confusing question: why does "virsh
nodeinfo" show the right information, but libvirt doesn't/can't use it?
--
My apologies. I meant to say that only libvirt has the issue with
detecting and properly using NUMA nodes. All other NUMA functions on
the system work as expected.
--
As I mentioned before, we did buy new server hardware to host the VMs.
This new hardware also has the same NUMA node issue.
I initially installed 14.04.2 on those new servers, then when the NUMA
issue was there, I thought "what the hey" and installed 15.04 just for
fun. The problem was gone -- NU
Why hello Stefan, glad to know you are still with us! :-)
laz@dev-vm0:~$ virsh nodeinfo
CPU model: x86_64
CPU(s): 24
CPU frequency: 1500 MHz
CPU socket(s): 1
Core(s) per socket: 6
Thread(s) per core: 2
NUMA cell(s): 2
Memory size: 198069904 KiB
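A quick way to cross-check libvirt's numbers against the kernel's own topology report (lscpu and the guard below are my suggestion, not something from the thread):

```shell
# The kernel's own topology report, independent of libvirt:
lscpu | grep -E 'Socket|Core|Thread|NUMA'
# libvirt's view, for comparison (guarded in case virsh is absent):
if command -v virsh >/dev/null 2>&1; then
    virsh nodeinfo
fi
```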
Here's the libvirtd.log. Has same issue with NUMA.
** Attachment added: "20150610-libvirtd.log"
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1446177/+attachment/4412639/+files/20150610-libvirtd.log
--
Ok back to 3.13.0-53-generic kernel, awaiting your command.
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1446177
Title:
Nodeinfo returns wrong NUMA topology / bad virtualization performance
Ah, no prob. Then I will go back to kernel 3.13 and we will go from
there. Thanks Stefan!
--
I will do whatever you think is the best for diagnosing the problem.
So you would like me to go to 12.04 then, yes? Or keep at 14.04.
--
Yes I have a dedicated test environment strictly for this issue. :-)
Would you like me to prepare a 15.04 default install ready for your
updated images?
Much appreciate all of your help Stefan!
--
I have tried two things. Upgrading to kernel 3.19.8 and also disabling
Hyper-Threading. Neither has any effect.
Here is attached libvirtd.log, initial part of log is with kernel 3.19.8
which I tried first. Second part of log, which starts at 13:40:28 is
with HT disabled.
** Attachment added: "
Here is some more information attached.
Also, the CPU is actually Intel 6-core with HT so it appears as 12-core.
I can disable HT and see what it reports. Also, running kernel
3.16.0-38-generic.
Back to another piece of interesting information, when I installed
Ubuntu 15.04 (just for fun to see
A much more comprehensive log to show the changes in the log as it goes.
This might be better.
** Attachment added: "libvirtd.log"
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1446177/+attachment/4410583/+files/libvirtd.log
--
Here we go Stefan. The log file is short and sweet -- hopefully it gets
you the information you are looking for!
** Attachment added: "libvirtd.log"
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1446177/+attachment/4410572/+files/libvirtd.log
--
Yes, Stefan, I am in the datacenter right now getting the new servers
online. In about a week or so, I will install your new binaries and
post results.
Thank you!
--
Also, in the meantime (if my test server becomes available before the
test packages), I can install upstream libvirt/qemu packages to see if
the fix came from there or if the fix came from elsewhere.
--
I would be more than happy to test for you Stefan. As long as it is
cookie cutter for a non-guru like myself, you just tell me what to do.
I will have a non-production server ready to rock in roughly a week from
now. Thank you for all of your efforts!
--
Stefan if you would like to poke around, I have a server we are taking
out of production (also Supermicro) that has this issue as well. I can
provide you access at that time. Possibly 1 week from now.
--
We are moving our equipment to the datacenter tomorrow, so won't have
much more input until after then. But all of the links etc are all
where they are expected to be. Not sure if the libvirt user can read
those, but I'm sure it can since my standard user can.
Running a 'find . -name physical_pa
Here we go.
** Attachment added: "sysfscpu.txt"
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1446177/+attachment/4406334/+files/sysfscpu.txt
--
Hmm, I am having issues uploading files, as well as downloading FliTTi's
attachments to view. Seems to be an issue with launchpadlibrarian.net?
I can post as plain text if you like. Or I will try uploading
sysfscpu.txt later on.
--
Number of cells seems right, but number of sockets is definitely wrong.
OS: Ubuntu 14.04.2 LTS
Kernel: 3.16.0-38-generic
Most up-to-date versions of all related packages as of May 26, 2015.
root@vm0:/media/scripts/vm# virsh capabilities
----0cc47a4c5e42
x86_64
I have this issue as well. This issue has persisted from many Ubuntu
versions ago, and has always made it extremely difficult to deal with
NUMA configuration. Oddly enough, after testing Ubuntu 15.04 last
weekend, it seems the issue is gone.
All of the servers that we have affected by this have
I might add, I am using Intel processors, not AMD.
--
Can't argue with results, can you now? With my fingers crossed, that's
good enough for me!
I like to stick with the Ubuntu LTS packages, instead of going upstream.
But I think I can now confidently move this into production. :-)
--
I stand corrected ... This is something to be addressed within
virt-manager and not libvirt. Using "Copy host CPU configuration" adds
+invtsc (among other things). I mistakenly thought that directive
simply does "-cpu host".
My apologies! Thank you and keep up the good work.
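For reference, the distinction being corrected here maps onto two different CPU modes in the libvirt domain XML (my reading of it; worth verifying against the libvirt documentation):

```xml
<!-- Plain "-cpu host" corresponds to passthrough mode: -->
<cpu mode='host-passthrough'/>

<!-- virt-manager's "Copy host CPU configuration" instead produces host-model,
     which copies the host's individual flags into the XML -- which is where
     something like +invtsc can come from: -->
<cpu mode='host-model'/>
```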
--
Public bug reported:
Description:    Ubuntu 15.04
Release:        15.04

libvirt-bin:
  Installed: 1.2.12-0ubuntu12
  Candidate: 1.2.12-0ubuntu13
  Version table:
     1.2.12-0ubuntu13 0
        500 http://us.archive.ubuntu.com/ubuntu/ vivid-updates/main amd64 Packages
 *** 1.2.12-0ubuntu12 0
Perfect! I love the references, thank you so much.
I'm going to call this "as good as it gets" and roll the dice for
testing, and maybe even for production.
I will report if I come across any unexpected issues.
Thanks Wolff.
--
Daniel, that's a very interesting fix. I'm not too familiar with any of
the kernel headers (or really any of the source for that matter). Is
the object you are referencing essentially the exact same thing, but
from kernel 3.13 -> 3.16?
If this is so ... Regarding the block I/O function, this wou
This seems to affect 3.16 kernels as far as I can tell.
In the meantime, is there an option for installing and using
iscsitarget? We have a new server going into the datacenter here in a
week or two, and would love to have something in place.
Our current iSCSI targets are running Ubuntu 14.04.1
Public bug reported:
This happened while doing "do-release-upgrade -d" from 12.04.4 -> 14.04.
I believe it is because I am running multiple LDAP databases. I don't
have any other information beyond that.
ProblemType: Package
DistroRelease: Ubuntu 14.04
Package: slapd 2.4.28-1.1ubuntu4.4
P
Public bug reported:
Can we update the Ubuntu packages to include Stephan Bosch's update to
client-authenticate.c in managesieve-login?
http://hg.rename-it.nl/dovecot-2.1-pigeonhole/rev/32d178f5e1a2
Certain sieve clients have some issues with the login procedure, and are
unable to complete their