On the other hand, upstream doesn't have support for IPv6. Looks like
that was patched in by a distro maintainer at some point.
--
https://bugs.launchpad.net/bugs/1899150
Title:
trickle has
Looks like it's missing some upstream changes, e.g.
https://github.com/mariusae/trickle/commit/bb2825a1fe938e303acd5d65508476ca03961d85
Looks like it's been diverging for a long time now... This upstream
commit from 2013 is missing as well:
Just wondering if this will be fixed anytime soon, or if I'll continue
doing my own builds for the next while? It's been almost a year now for
a simple build-dependency change.
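For reference, my own builds are just the standard source-package
rebuild, roughly like this (the debian/control edit is whatever
build-dependency change you actually need; I'm not guessing at the
specifics here):
sudo apt-get build-dep trickle
apt-get source trickle
cd trickle-*/
# edit debian/control to apply the build-dependency change
dpkg-buildpackage -us -uc
sudo dpkg -i ../trickle_*.deb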
--
Natty backport to Lucid available in my PPA repo:
https://launchpad.net/~edwin-chiu/+archive/bind9
--
https://bugs.launchpad.net/bugs/807324
Title:
BIND 9.7.0 (ie., lucid) is overly
A bunch of domains are impacted; I'm shocked a patch hasn't been issued
for Lucid yet...
Some other domains impacted:
webserver.mta.info
www.energy.alberta.ca
www.baixing.com
www.engineering.utoronto.ca
e.newegg.ca
dns1.name-services.com
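For anyone wanting to reproduce: assuming your Lucid BIND is listening
on 127.0.0.1, something like this should show the failure (the second
query is just a sanity check against a non-BIND-9.7.0 resolver):
dig @127.0.0.1 webserver.mta.info A
dig @8.8.8.8 webserver.mta.info A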
--
More domains impacted:
zidvox.com
log.sv.pandora.tv
n1.pandora.tv
secure.newegg.ca
ns.isipp.com - looks like some DNS servers are impacted as well, so any
domain that uses this as its NS record will presumably fail
www.chem-eng.utoronto.ca
cdn.kmplayer.com
ns1.webhostj.com
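If you want to check whether one of your own domains delegates to an
impacted server, a quick NS lookup should tell you (example.com is just
a placeholder):
dig +short NS example.com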
I suspect this isn't just one bug, but several. Everyone has a
different fix for the same problem. For me, ditching Seagate
ST32000542AS drives for WD20EARS drives fixed my problem.
On Wed, May 11, 2011 at 05:21, Lars 550...@bugs.launchpad.net wrote:
Hi!
I tested with
2.6.32-22-server: same
configure is looking for the parted binary, so I think the missing
dependency is probably the parted package...
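A quick sanity check for that theory (this is just my guess at what
configure is probing for):
command -v parted || echo "parted binary not found"
dpkg -l parted libparted0 libparted0-dev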
On Wed, May 4, 2011 at 09:50, Serge Hallyn
697...@bugs.launchpad.net wrote:
@JamesPage,
exactly which dependency did you add? 'libparted0'? ('libparted0-dev'
is already there, but
When will this be fixed in maverick and/or natty? The fix is trivial
and was reported almost 4 months ago...
--
https://bugs.launchpad.net/bugs/697046
Title:
Unable to create volumes on
You could try adding the kernel option: pcie_aspm=off
Didn't work for me though...
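For anyone who wants to try it anyway, the usual GRUB 2 route on Ubuntu
(assuming a stock /etc/default/grub) is to edit the kernel command
line:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pcie_aspm=off"
then run:
sudo update-grub
and reboot.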
On Tue, Apr 5, 2011 at 03:55, IKT noname...@gmail.com wrote:
only have 1 drive,
1 x OCZ 120GB SSD
1 x 1.5TB WD HDD
--
Which one is failing? Same MB? Which WD model?
On Mon, Apr 4, 2011 at 07:58, IKT noname...@gmail.com wrote:
Awesome bug.
Same situation as the OP: 120GB SSD drive w/ a 1.5TB drive for files.
For reference, the 1.5TB drive is a Western Digital.
--
Just the one drive? Strange... I have a WD20EARS drive in the mix, no
problems. Gonna swap out 2 Seagates for WDs later, and hopefully my
problems will go away.
Not sure if the MB is entirely at fault. Using MHDD, I found several
very slow sectors on the Seagates on another controller (old P4
I suspect the Seagates are operating outside of their defined maximum
current draws. I don't really have the equipment to measure this
properly though.
The problem with RAID is that when syncing up writes, a simultaneous
cache flush is done, and this is suspected to cause a spike in power
draw.
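One crude way to test the flush theory (not a fix, and it costs write
performance) is to turn off the write cache on each Seagate and see if
the resets stop; sdX is a placeholder for each drive:
sudo hdparm -W0 /dev/sdX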
I'm just speaking from my own experience. I have WD drives in play and they
don't seem impacted at all, only the Seagates. As for others with issues, I
don't recall if those were related or not. With the new PSU, the problem
pretty much went away for almost 2 months, but has since resurfaced. I
Add to your boot params: libata.force=noncq
It's not guaranteed to work, but it helps quite a bit.
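If you'd rather not reboot first, you can also check/toggle NCQ per
disk via sysfs (sdX being whichever drive; a queue depth of 1
effectively disables NCQ):
cat /sys/block/sdX/device/queue_depth
echo 1 | sudo tee /sys/block/sdX/device/queue_depth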
Also, are you running in any sort of RAID configuration? I found a new
PSU helped a little as well; I had very few incidents up until
yesterday... I still blame Seagate drives as being part of the
Strange, probably a different issue than mine. Mine seems to have gone
away with a new PSU. Seems like my particular drives, Barracuda 2TB LPs
(ST32000542AS), exceed their rated power draw when flushing their cache.
Normally not an issue, but in a RAID configuration, the simultaneous
flush seems to
So this appears to be happening on Marvell, JMicron, and SB700/800
chips - not good!
What kernel version of Arch Linux are you running? A regression
somewhere around the 2.6.33 timeframe sounds likely.
Another solution I've seen (it didn't work for me) was to try
pcie_aspm=off in your boot options.
Tried booting 2.6.31-22-server (from karmic) on a maverick install and
got the same error. I'm not entirely convinced this is a software bug;
it seems to target the same drive. I have 5 identical drives, and
switching them around so they are on different ports/cables, etc.,
doesn't seem to make the problem
Try a different HD. I've had similar issues with my Seagate
ST32000542AS drives; 3 out of 5 died within 4 weeks. Prior to their
official death, I saw errors similar to the ones above...
--
Public bug reported:
Ubuntu 10.10 x64
libvirt-bin:
Installed: 0.8.3-1ubuntu14
Candidate: 0.8.3-1ubuntu14
Error:
r...@codis:/etc/libvirt/storage# virsh vol-create-as --pool sde --name 1
--capacity 100M --allocation 100M
error: Failed to create vol 1
error: internal error ' /dev/sde mkpart
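In case it helps: assuming this is the missing 'parted' binary issue
discussed above, installing it and retrying is probably the quickest
workaround:
sudo apt-get install parted
virsh vol-create-as --pool sde --name 1 --capacity 100M --allocation 100M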