[Group.of.nepali.translators] [Bug 1668639] Re: Add a trigger to reload rsyslog when a new configuration file is dropped in /etc/rsyslog.d

2017-08-31 Thread Frode Nordahl
Marking as Won't Fix for Yakkety as it is EOL.

Removing sts-sru-needed tag until patches are ready.

** Changed in: rsyslog (Ubuntu Yakkety)
   Status: New => Won't Fix

** Tags removed: sts-sru-needed

-- 
You received this bug notification because you are a member of the Nepali
Language Coordinators Group (नेपाली भाषा समायोजकहरुको समूह), which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1668639

Title:
  Add a trigger to reload rsyslog when a new configuration file is
  dropped in /etc/rsyslog.d

Status in rsyslog package in Ubuntu:
  Fix Released
Status in rsyslog source package in Trusty:
  New
Status in rsyslog source package in Xenial:
  New
Status in rsyslog source package in Yakkety:
  Won't Fix
Status in rsyslog source package in Zesty:
  New
Status in rsyslog source package in Artful:
  Fix Released
Status in rsyslog package in Debian:
  Fix Released

Bug description:
  [Impact]
  Servers or cloud instances will not log important messages after initial
  deployment. A manual reboot or restart of services is necessary to get the
  expected behaviour.

  [Test Case]
  1) Install, enable, and start haproxy
  2) Observe that /etc/rsyslog.d/49-haproxy.conf is installed
  3) Observe that /var/lib/haproxy/dev/log and /var/log/haproxy.log are NOT created
  4) Restart the rsyslog service
  5) Observe that /var/lib/haproxy/dev/log and /var/log/haproxy.log ARE created
  6) Restart the haproxy service and observe that the log is now filled with entries

  With the patched deb, steps 3, 4, and 6 become irrelevant and everything
  works out of the box.
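  The numbered steps above can be run from a shell roughly as follows
  (paths as given in the test case; exact output will vary by release):

  sudo apt-get install haproxy
  ls /etc/rsyslog.d/49-haproxy.conf

  # Before the fix, neither the log socket nor the log file exists yet:
  ls /var/lib/haproxy/dev/log /var/log/haproxy.log

  # Restarting rsyslog makes it read the new snippet:
  sudo systemctl restart rsyslog
  ls /var/lib/haproxy/dev/log /var/log/haproxy.log

  # With the patched package, the dpkg trigger performs this reload
  # automatically when 49-haproxy.conf is installed.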

  [Regression Potential]
  Minimal.

  This patch merges a patch from Debian where a trigger is added to the
  rsyslog package that fires when other debs drop files into
  /etc/rsyslog.d.
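
  For reference, the Debian change works via a dpkg file trigger: the
  rsyslog package declares interest in the directory, and its maintainer
  script reloads the daemon when the trigger fires. A minimal sketch
  (file names follow Debian packaging conventions; the exact reload
  command used by the real package may differ):

  # debian/rsyslog.triggers
  interest-noawait /etc/rsyslog.d

  # debian/rsyslog.postinst (excerpt)
  case "$1" in
      triggered)
          invoke-rc.d rsyslog force-reload || true
          ;;
  esac

  With this in place, any package that installs a file under
  /etc/rsyslog.d activates the trigger, and dpkg runs the rsyslog
  postinst with the 'triggered' argument.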

  
  [Original Bug Description]
  rsyslog should reload its configuration when other packages drop 
configuration in /etc/rsyslog.d

  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=791337

  https://anonscm.debian.org/cgit/collab-
  maint/rsyslog.git/commit/?id=8d4074003f8fb19dae07c59dd19f0540a639210f

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/rsyslog/+bug/1668639/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1666827] Re: Backport fixes for Rename Network return 403 Error

2017-03-10 Thread Frode Nordahl
** Changed in: horizon
   Status: New => Fix Released

-- 
https://bugs.launchpad.net/bugs/1666827

Title:
  Backport fixes for Rename Network return 403 Error

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Triaged
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in horizon package in Ubuntu:
  New
Status in horizon source package in Trusty:
  Invalid
Status in horizon source package in Xenial:
  Triaged
Status in horizon source package in Yakkety:
  Fix Released

Bug description:
  [Impact]
  Non-admin users are not allowed to change the name of a network using the
  OpenStack Dashboard GUI.

  [Test Case]
  1. Deploy trusty-mitaka or xenial-mitaka OpenStack Cloud
  2. Create demo project
  3. Create demo user
  4. Log into OpenStack Dashboard using demo user
  5. Go to Project -> Network and create a network
  6. Go to Project -> Network and Edit the just created network
  7. Change the name and click Save
  8. Observe that your request is denied with an error message

  [Regression Potential]
  Minimal.

  We are adding a patch, already merged into upstream stable/mitaka, that
  makes horizon call policy_check before sending the update request to
  Neutron when updating networks.

  The addition of the rule "update_network:shared" to horizon's copy of
  the Neutron policy.json is our own, because upstream was not willing to
  backport this required change. This rule is not referenced anywhere
  else in the code base, so it will not affect other policy_check calls.
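
  As an illustrative sketch, the added rule would sit in horizon's
  bundled copy of the Neutron policy.json alongside the existing rules
  (the surrounding entries here are examples, not the actual file
  contents):

  {
      "context_is_admin": "role:admin",
      "update_network": "rule:admin_or_owner",
      "update_network:shared": "rule:context_is_admin"
  }

  Since nothing else references "update_network:shared", the new entry
  only affects the policy_check call added by the backported patch.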

  Upstream bug: https://bugs.launchpad.net/horizon/+bug/1609467

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1666827/+subscriptions



[Group.of.nepali.translators] [Bug 1666827] Re: Backport fixes for Rename Network return 403 Error

2017-03-03 Thread Frode Nordahl
** Patch added: "fix-dashboard-change-network-name-policy.patch"
   
https://bugs.launchpad.net/ubuntu/+source/horizon/+bug/1666827/+attachment/4830680/+files/fix-dashboard-change-network-name-policy.patch

** Changed in: horizon (Ubuntu Trusty)
   Status: Triaged => Invalid

-- 
https://bugs.launchpad.net/bugs/1666827

Title:
  Backport fixes for Rename Network return 403 Error

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Triaged
Status in OpenStack Dashboard (Horizon):
  New
Status in horizon package in Ubuntu:
  New
Status in horizon source package in Trusty:
  Invalid
Status in horizon source package in Xenial:
  Triaged
Status in horizon source package in Yakkety:
  Fix Released




[Group.of.nepali.translators] [Bug 1628750] Re: Please backport fixes from 10.2.3 and tip for RadosGW

2017-02-28 Thread Frode Nordahl
** Tags removed: verification-mitaka-needed
** Tags added: verification-mitaka-done

** Changed in: cloud-archive/mitaka
   Status: Fix Committed => Fix Released

-- 
https://bugs.launchpad.net/bugs/1628750

Title:
  Please backport fixes from 10.2.3 and tip for RadosGW

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in ceph package in Ubuntu:
  Fix Released
Status in ceph source package in Xenial:
  Fix Released
Status in ceph source package in Yakkety:
  Fix Released

Bug description:
  [Impact]
  In ceph deployments with large numbers of objects (typically generated by
  use of radosgw for object storage), during recovery operations when servers
  or disks fail, it is quite possible for OSDs recovering data to hit their
  suicide timeout and shut down because of the number of objects each was
  trying to recover in a single chunk between heartbeats. As a result,
  clusters go read-only due to reduced data availability.

  [Test Case]
  Non-trivial to reproduce - see original bug report.

  [Regression Potential]
  Medium; the fix for this problem is to reduce the number of operations per
  chunk to 64000, limiting the chance that an OSD will miss heartbeats and
  kill itself as a result. This is configurable, so it can be tuned on a
  per-environment basis.
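
  A hedged sketch of how such a limit could be tuned in ceph.conf; the
  option name here is inferred from the 'limit omap data in push op'
  patch referenced below and should be verified against the installed
  version:

  [osd]
  # Maximum omap entries sent in a single recovery push operation
  # (assumed option name; default per this SRU is 64000)
  osd recovery max omap entries per chunk = 64000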

  The patch has been accepted into the Ceph master branch, but is not
  currently targeted as a stable fix for Jewel.

  >> Original Bug Report <<

  We've run into significant issues with RadosGW at scale; we have a
  customer who has ½ billion objects in ~20Tb of data and whenever they
  lose an OSD for whatever reason, even for a very short period of time,
  ceph was taking hours and hours to recover.  The whole time it was
  recovering requests to RadosGW were hanging.

  I ended up cherry-picking 3 patches; 2 from 10.2.3 and one from trunk:

    * d/p/fix-pg-temp.patch: cherry pick
      56bbcb1aa11a2beb951de396b0de9e3373d91c57 from jewel.
    * d/p/only-update-up_thru-if-newer.patch:
      6554d462059b68ab983c0c8355c465e98ca45440 from jewel.
    * d/p/limit-omap-data-in-push-op.patch:
      38609de1ec5281602d925d20c392ba4094fdf9d3 from master.

  The 2 from 10.2.3 are included because pg_temp was implicated in one of
  the longer outages we had.

  The last one is what I think actually got us to a point where ceph was
  stable, and I found it via the following URL chain:

  http://lists.opennebula.org/pipermail/ceph-users-ceph.com/2016-June/010230.html
  -> http://tracker.ceph.com/issues/16128
  -> https://github.com/ceph/ceph/pull/9894
  -> https://github.com/ceph/ceph/commit/38609de1ec5281602d925d20c392ba4094fdf9d3

  With these 3 patches applied the customer has been stable for 4 days
  now, but I've yet to restart the entire cluster (only the stuck OSDs),
  so it's hard to be completely sure that all our issues are resolved,
  or which of the patches fixed things.

  I've attached the debdiff I used for reference.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1628750/+subscriptions



[Group.of.nepali.translators] [Bug 1655421] Re: nouveau driver being too conservative about HDMI pixel clock, barring usage of native resolution on 4K displays over HDMI

2017-01-30 Thread Frode Nordahl
This patch is already in the Yakkety kernel. It will also be released as
part of the Xenial HWE kernel in Ubuntu Xenial 16.04.2.

Marking this bug as Invalid for Yakkety and Xenial, and directing
interested parties to the Xenial HWE kernel.

Prior to its release in February 2017, it can be tested by installing the
hwe-edge packages. Have a look at this page for more information:
https://wiki.ubuntu.com/Kernel/RollingLTSEnablementStack
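
As a sketch, the hwe-edge kernel can be installed through its
meta-package (the package name follows the Ubuntu LTS enablement stack
naming; verify against the wiki page above):

  sudo apt-get install linux-generic-hwe-16.04-edge
  sudo reboot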


** Changed in: linux (Ubuntu Yakkety)
   Status: In Progress => Invalid

** Changed in: linux (Ubuntu Xenial)
   Status: In Progress => Invalid

-- 
https://bugs.launchpad.net/bugs/1655421

Title:
  nouveau driver being too conservative about HDMI pixel clock, barring
  usage of native resolution on 4K displays over HDMI

Status in linux package in Ubuntu:
  Fix Released
Status in linux source package in Xenial:
  Invalid
Status in linux source package in Yakkety:
  Invalid
Status in linux source package in Zesty:
  Fix Released

Bug description:
  Jan 10 18:56:03 frode-MacBookPro kernel: [4.993311] [drm:drm_add_edid_modes.part.22 [drm]] HDMI: DVI dual 0, max TMDS clock 30 kHz
  Jan 10 18:56:03 frode-MacBookPro kernel: [4.993315] nouveau :01:00.0: DRM: native mode from largest: 1920x1080@60
  Jan 10 18:56:03 frode-MacBookPro kernel: [4.993342] [drm:drm_mode_debug_printmodeline [drm]] Modeline 69:"3840x2160" 30 297000 3840 4016 4104 4400 2160 2168 2178 2250 0x48 0x5
  Jan 10 18:56:03 frode-MacBookPro kernel: [4.993348] [drm:drm_mode_prune_invalid [drm]] Not using 3840x2160 mode: CLOCK_HIGH
  Jan 10 18:56:03 frode-MacBookPro kernel: [4.993353] [drm:drm_mode_debug_printmodeline [drm]] Modeline 73:"2560x1440" 60 241500 2560 2608 2640 2720 1440 1443 1448 1481 0x40 0x9
  Jan 10 18:56:03 frode-MacBookPro kernel: [4.993358] [drm:drm_mode_prune_invalid [drm]] Not using 2560x1440 mode: CLOCK_HIGH
  Jan 10 18:56:03 frode-MacBookPro kernel: [4.993363] [drm:drm_mode_debug_printmodeline [drm]] Modeline 104:"3840x2160" 25 297000 3840 4896 4984 5280 2160 2168 2178 2250 0x40 0x5
  Jan 10 18:56:03 frode-MacBookPro kernel: [4.993368] [drm:drm_mode_prune_invalid [drm]] Not using 3840x2160 mode: CLOCK_HIGH
  Jan 10 18:56:03 frode-MacBookPro kernel: [4.993373] [drm:drm_mode_debug_printmodeline [drm]] Modeline 105:"3840x2160" 24 297000 3840 5116 5204 5500 2160 2168 2178 2250 0x40 0x5
  Jan 10 18:56:03 frode-MacBookPro kernel: [4.993378] [drm:drm_mode_prune_invalid [drm]] Not using 3840x2160 mode: CLOCK_HIGH
  Jan 10 18:56:03 frode-MacBookPro kernel: [4.993383] [drm:drm_mode_debug_printmodeline [drm]] Modeline 106:"3840x2160" 30 296703 3840 4016 4104 4400 2160 2168 2178 2250 0x40 0x5
  Jan 10 18:56:03 frode-MacBookPro kernel: [4.993387] [drm:drm_mode_prune_invalid [drm]] Not using 3840x2160 mode: CLOCK_HIGH
  Jan 10 18:56:03 frode-MacBookPro kernel: [4.993392] [drm:drm_mode_debug_printmodeline [drm]] Modeline 116:"3840x2160" 24 296703 3840 5116 5204 5500 2160 2168 2178 2250 0x40 0x5
  Jan 10 18:56:03 frode-MacBookPro kernel: [4.993397] [drm:drm_mode_prune_invalid [drm]] Not using 3840x2160 mode: CLOCK_HIGH

  However, it is perfectly valid to use a higher pixel clock on some
  graphics cards.

  This patch allows the end user to adjust the pixel clock with the
  hdmimhz kernel parameter until the detection code gets good enough:
  https://github.com/torvalds/linux/commit/1a0c96c075bb4517d4ce4fb6750ee0a3cf38714c

  This page contains a lot of useful information and led me to a solution
  to my problem:
  https://www.elstel.org/software/hunt-for-4K-UHD-2160p.html.en

  I think this is worthy of consideration for a backport to the Xenial
  kernel, as not being able to use the most recent LTS version of Ubuntu
  Desktop with current hardware is a drag.
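
  To make the parameter persistent it can be added to the kernel command
  line via GRUB; a sketch (the value 297 matches the hdmimhz=297 recorded
  in the kernel command line in this report; adjust for your display):

  # /etc/default/grub
  GRUB_CMDLINE_LINUX_DEFAULT="quiet splash hdmimhz=297"

  # then regenerate the configuration and reboot
  sudo update-grub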
  --- 
  ApportVersion: 2.20.1-0ubuntu2.4
  Architecture: amd64
  AudioDevicesInUse:
   USER   PID  ACCESS COMMAND
   /dev/snd/controlC1:  frode  2797 F pulseaudio
   /dev/snd/controlC0:  frode  2797 F pulseaudio
  CurrentDesktop: Unity
  DistroRelease: Ubuntu 16.04
  HibernationDevice: RESUME=UUID=ae0b4ef2-57b4-4ab5-ac55-be113c10f1d1
  InstallationDate: Installed on 2017-01-08 (2 days ago)
  InstallationMedia: Ubuntu 16.04.1 LTS "Xenial Xerus" - Release amd64 (20160719)
  MachineType: Apple Inc. MacBookPro10,1
  NonfreeKernelModules: wl
  Package: linux-image-extra-4.4.0-59-generic 4.4.0-59.80
  PackageArchitecture: amd64
  ProcFB:
   0 inteldrmfb
   1 nouveaufb
  ProcKernelCmdLine: BOOT_IMAGE=/vmlinuz-4.4.0-59-generic.efi.signed root=/dev/mapper/ubuntu--vg-root ro quiet splash hdmimhz=297 vt.handoff=7
  ProcVersionSignature: Ubuntu 4.4.0-59.80-generic 4.4.35
  RelatedPackageVersions:
   linux-restricted-modules-4.4.0-59-generic N/A
   linux-backports-modules-4.4.0-59-generic  N/A
   

[Group.of.nepali.translators] [Bug 1587261] Re: [SRU] Swift bucket X-Timestamp not set by Rados Gateway

2016-10-27 Thread Frode Nordahl
** Changed in: ceph (Ubuntu Zesty)
   Status: In Progress => Fix Released

-- 
https://bugs.launchpad.net/bugs/1587261

Title:
  [SRU] Swift bucket X-Timestamp not set by Rados Gateway

Status in Ubuntu Cloud Archive:
  New
Status in ceph package in Ubuntu:
  Fix Released
Status in ceph source package in Xenial:
  New
Status in ceph source package in Yakkety:
  New
Status in ceph source package in Zesty:
  Fix Released

Bug description:
  [Impact]

   * A basic characteristic of an object store is the ability to create
     buckets and objects and to query for information about said
     buckets and objects.

   * In the current version of the ceph radosgw package it is not
     possible to get the creation time of buckets. This is a serious
     defect and makes it impossible to use Ubuntu with ceph as an
     object store for some applications.

   * The issue has been fixed in upstream master.

   * The proposed debdiff solves the issue by including patches
     cherry-picked and adapted from the upstream master branch.

  [Test Case]

   * Use Juju to deploy Ceph cluster with radosgw and relation to OpenStack
     Keystone. Example bundle: http://pastebin.ubuntu.com/23374308/

   * Install OpenStack Swift client

  sudo apt-get install python-swiftclient

   * Load OpenStack Credentials pointing to your test deployment

  wget https://raw.githubusercontent.com/openstack-charmers/openstack-bundles/master/development/shared/novarc
  . novarc

   * Create swift bucket

  swift post test

   * Display information about newly created bucket

  swift stat test

   * Observe that key 'X-Timestamp' has value 0.0

   * Delete bucket

  swift delete test

   * Install the patched radosgw packages on the 'ceph-radosgw' unit and
     repeat the steps above

   * Verify that key 'X-Timestamp' now has a value > 0.0, corresponding
     to the unix time of when you created the bucket.
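
  The X-Timestamp value is a Unix epoch timestamp, which is why an unset
  value shows up as a 1970 creation date. A minimal Python sketch of the
  conversion (the non-zero sample value is hypothetical):

```python
from datetime import datetime, timezone

def xtimestamp_to_iso(value: str) -> str:
    """Convert a Swift X-Timestamp (Unix epoch seconds, possibly
    fractional) to an ISO-8601 UTC date string."""
    return datetime.fromtimestamp(float(value), tz=timezone.utc).isoformat()

# The buggy radosgw returns 0.0, which renders as the Unix epoch:
print(xtimestamp_to_iso("0.0"))           # 1970-01-01T00:00:00+00:00
# A correctly set timestamp renders as the bucket's creation time:
print(xtimestamp_to_iso("1477555200.0"))  # 2016-10-27T08:00:00+00:00
```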

  [Regression Potential]

   * The patch is simple and I see little potential for any regression as a
     result of it being applied.

  [Original bug description]
  When creating a swift/radosgw bucket in horizon, the bucket gets created
  but shows up with a creation date of 19700101.

  In the apache log one can observe

  curl -i http://10.11.140.241:80/swift/v1/bucket1 -I -H "X-Auth-Token:  ...
  Container HEAD failed: http://10.11.140.241:80/swift/v1/bucket1 404 Not Found

  However a manual curl call succeeds. Also the radosgw.log shows
  successful PUT/GET requests.

  I get similar results using the swift command line utility with
  containers inheriting a creation date of 19700101 even though I can
  see the correct date being passed to rados in the headers of the
  request.

  There are also similar issues with ceilometer integration, as logged:

  2016-05-31 06:28:16.931 1117922 WARNING ceilometer.agent.manager [-] Continue after error from storage.containers.objects: Account GET failed: http://10.101.140.241:80/swift/v1/AUTH_025d6aa2af18415a87c012211edb7fea?format=json 404 Not Found  [first 60 chars of response] {"Code":"NoSuchBucket","BucketName":"AUTH_025d6aa2af18415a87
  2016-05-31 06:28:16.931 1117922 ERROR ceilometer.agent.manager Traceback (most recent call last):

  This is using charm version 86 against OpenStack Mitaka.

  This also seems pretty reproducible with any ceph, ceph-rados and
  mitaka install via the juju charms.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1587261/+subscriptions
