Re: [arch-general] A few out of date packages

2019-05-19 Thread Anatol Pomozov via arch-general
Hello

On Sun, May 19, 2019 at 3:08 PM Genes Lists via arch-general
 wrote:
>
> Hi - First a thank you for all the great work managing and keeping
> packages up to date. It can be a significant amount of work.
>
> That said, periodically I check the repos for out of date packages. I've
> selected a few to highlight based on age and my own view of importance
> (no claim it's a good view).
>
> There are also some packages which have been languishing in testing for
> a while (not sure the reason - I chose to ignore those here.).
>
> Here's a few that might benefit from an update:
>
>                 Repo              Current
> Package       Vers    Date      Vers    Date      Age
> ------------  ------  --------  ------  --------  ---
> dkms          2.5     20171130  2.7.1   20190512  528
> usbutils      0.10    20180515  0.12    20190507  357
> refind-efi    11.3    20180722  11.4    20181112  113
> efibootmgr    16      20180409  17      20180610   62
> autofs        5.1.4   20171219  5.1.5   20181030  315
> lsof          4.91    20180404  4.93.2  20190508  399

Where did you find lsof version 4.93.2?

The official website does not mention that version, and the mirror
Arch uses hosts 4.91 only: ftp://ftp.fu-berlin.de/pub/unix/tools/lsof/

> cifs-utils    6.8     20180313  6.9     20190405  388
> chrony        3.4     20180919  3.5     20190514  237
> docbook-xml   4.5               5.1               ?
> alsa-lib      1.1.8   20190107  1.1.9   20190510  123
> alsa-plugins  1.1.8   20190107  1.1.9   20190510  123
> alsa-utils    1.1.8   20190107  1.1.9   20190510  123

The alsa packages at version 1.1.9 have been in [testing] for a week
now. They will be moved to stable in a few days.

> gradle        5.2.1   20190208  5.4.1   20190426   77
>
>
> Hope this is helpful, thank you!

Thank you for the feedback! Really appreciate it.


Re: [arch-general] Ruby 2.6 is in [testing]

2019-01-09 Thread Anatol Pomozov via arch-general
Hi

Forgot to mention: from a packaging perspective, the main change is that
the irb tool has been moved to a separate package called 'ruby-irb'. This
was done to reflect the fact that irb development moved from the main ruby
repo to its own repo: https://github.com/ruby/irb


[arch-general] Ruby 2.6 is in [testing]

2019-01-09 Thread Anatol Pomozov via arch-general
Happy New Year folks.

Just a quick heads-up: the ruby 2.6 rebuild just entered the [testing] repository.

If you see any issues with the ruby packages, please report them either
to Arch or to the upstream developers. Happy testing, folks.


Re: [arch-general] syslinux: out of date - or not?

2018-12-21 Thread Anatol Pomozov via arch-general
Hello


On Fri, Dec 21, 2018 at 3:34 AM Bjoern Franke  wrote:
>
> Hi,
>
> I recently wanted to switch from grub to syslinux, but it could not boot
> my /boot-partition, because it uses XFS.
>
> Unfortunately only syslinux 6.04 supports XFS, while we stick on 6.03.
> 6.04 is somehow a "testing" version, though it has been out for 2
> years, so I marked 6.03 as "out of date".
>
> I'm wondering a bit why we stick on 6.03, even Debian stable[1] has 6.04.

That is a question you should really ask the upstream developers: if the
project is stable enough for Debian, why don't they roll out a new
release? Having a project in a usable-but-not-released state is confusing.

Anyway, an alpha version of 6.04 has been pushed to [testing]. The best
thing one can do is to test it and make sure all the use cases you
need (e.g. XFS) work correctly. Happy holidays and happy testing.


Re: [arch-general] Issue linking 32-bit GAS assembly

2018-10-17 Thread Anatol Pomozov via arch-general
Hello
On Wed, Oct 17, 2018 at 12:37 PM Dutch Ingraham  wrote:
>
> Hi all:
>
> I'm having a problem linking 32-bit GAS assembly code that references external
> C libraries on an up-to-date 64-bit Arch installation.
>
> I have enabled the 32-bit repositories and installed the multilib-devel group
> and lib32-glibc. I have updated the library cache with ldconfig and rebooted.
>
> The command to assemble is: .  Assembly
> succeeds.
>
> However, when linking using the command
>  filename.o>
>
> the command fails with:
> ld: skipping incompatible /usr/lib/libc.so when searching for -lc
> ld: skipping incompatible /usr/lib/libc.a when searching for -lc
> ld: cannot find -lc
>
> Note this linker command (or a hierarchy-appropriate one) seems standard, and
> succeeds on a 64-bit Debian OS.
>
> There is a fairly recent Forum question (but pre-dating the discontinuance of
> i686 support) regarding the same issue at
> https://bbs.archlinux.org/viewtopic.php?id=229235 which indicates the command
> should be:
>
>  filename filename.o -lc>
>
> This command succeeds, insofar as the linker returns an exit code of 0. 
> However,
> running the program (and all other similar programs) fails with "Illegal
> instruction (core dumped)." Assembling with debugging symbols and running a
> backtrace didn't enlighten me.
>
> Note that changing /lib/ld-linux to /usr/lib32/ld-linux in the command 
> immediately
> above produces the same result, as confirmed by the output of 
>
> Anyone see something I've missed or have any suggestions?  Thanks.
>
>
> ---
>
> Here is some simple code to test with:
>
>
> # paramtest2.s - Listing system environment variables
> .section .data
> output:
>.asciz "%s\n"
> .section .text
> .globl _start
> _start:
>movl %esp, %ebp
>addl $12, %ebp
> loop1:
>cmpl $0, (%ebp)
>je endit
>pushl (%ebp)
>pushl $output
>call printf
>addl $12, %esp
>addl $4, %ebp
>loop loop1
> endit:
>pushl $0
>call exit


Here is what I ran to make your example work on my Arch Linux machine:


sudo pacman -S lib32-glibc
as --32 -o asm.o asm.s
ld -melf_i386 --dynamic-linker /lib/ld-linux.so.2 -L/usr/lib32 -lc -o asm asm.o
./asm


The only difference is that I specified -L (the library search path) to
make sure the linker can find the 32-bit version of libc.


Re: [arch-general] What's with Ray Rashif?

2016-10-19 Thread Anatol Pomozov via arch-general
Hi

lv2 package in [testing] is broken

lv2: /usr/lib64/lv2/midi.lv2/midi.h exists in filesystem
lv2: /usr/lib64/lv2/midi.lv2/midi.ttl exists in filesystem
lv2: /usr/lib64/lv2/morph.lv2/lv2-morph.doap.ttl exists in filesystem
lv2: /usr/lib64/lv2/morph.lv2/manifest.ttl exists in filesystem
lv2: /usr/lib64/lv2/morph.lv2/morph.h exists in filesystem
lv2: /usr/lib64/lv2/morph.lv2/morph.ttl exists in filesystem
lv2: /usr/lib64/lv2/options.lv2/lv2-options.doap.ttl exists in filesystem
lv2: /usr/lib64/lv2/options.lv2/manifest.ttl exists in filesystem
lv2: /usr/lib64/lv2/options.lv2/options.h exists in filesystem
lv2: /usr/lib64/lv2/options.lv2/options.ttl exists in filesystem
lv2: /usr/lib64/lv2/parameters.lv2/lv2-parameters.doap.ttl exists in filesystem
lv2: /usr/lib64/lv2/parameters.lv2/manifest.ttl exists in filesystem
lv2: /usr/lib64/lv2/parameters.lv2/parameters.h exists in filesystem
lv2: /usr/lib64/lv2/parameters.lv2/parameters.ttl exists in filesystem
lv2: /usr/lib64/lv2/patch.lv2/lv2-patch.doap.ttl exists in filesystem
lv2: /usr/lib64/lv2/patch.lv2/manifest.ttl exists in filesystem
lv2: /usr/lib64/lv2/patch.lv2/patch.h exists in filesystem
lv2: /usr/lib64/lv2/patch.lv2/patch.ttl exists in filesystem
lv2: /usr/lib64/lv2/port-groups.lv2/lv2-port-groups.doap.ttl exists in
filesystem
lv2: /usr/lib64/lv2/port-groups.lv2/manifest.ttl exists in filesystem
lv2: /usr/lib64/lv2/port-groups.lv2/port-groups.h exists in filesystem
lv2: /usr/lib64/lv2/port-groups.lv2/port-groups.ttl exists in filesystem
lv2: /usr/lib64/lv2/port-props.lv2/lv2-port-props.doap.ttl exists in filesystem
lv2: /usr/lib64/lv2/port-props.lv2/manifest.ttl exists in filesystem
lv2: /usr/lib64/lv2/port-props.lv2/port-props.h exists in filesystem
lv2: /usr/lib64/lv2/port-props.lv2/port-props.ttl exists in filesystem
lv2: /usr/lib64/lv2/presets.lv2/lv2-presets.doap.ttl exists in filesystem
lv2: /usr/lib64/lv2/presets.lv2/manifest.ttl exists in filesystem
lv2: /usr/lib64/lv2/presets.lv2/presets.h exists in filesystem
lv2: /usr/lib64/lv2/presets.lv2/presets.ttl exists in filesystem
lv2: /usr/lib64/lv2/resize-port.lv2/lv2-resize-port.doap.ttl exists in
filesystem
lv2: /usr/lib64/lv2/resize-port.lv2/manifest.ttl exists in filesystem
lv2: /usr/lib64/lv2/resize-port.lv2/resize-port.h exists in filesystem
lv2: /usr/lib64/lv2/resize-port.lv2/resize-port.ttl exists in filesystem
lv2: /usr/lib64/lv2/schemas.lv2/dcs.ttl exists in filesystem
lv2: /usr/lib64/lv2/schemas.lv2/dct.ttl exists in filesystem
lv2: /usr/lib64/lv2/schemas.lv2/doap.ttl exists in filesystem
lv2: /usr/lib64/lv2/schemas.lv2/foaf.ttl exists in filesystem
lv2: /usr/lib64/lv2/schemas.lv2/manifest.ttl exists in filesystem
lv2: /usr/lib64/lv2/schemas.lv2/owl.ttl exists in filesystem
lv2: /usr/lib64/lv2/schemas.lv2/rdf.ttl exists in filesystem
lv2: /usr/lib64/lv2/schemas.lv2/rdfs.ttl exists in filesystem
lv2: /usr/lib64/lv2/schemas.lv2/xsd.ttl exists in filesystem
lv2: /usr/lib64/lv2/state.lv2/lv2-state.doap.ttl exists in filesystem
lv2: /usr/lib64/lv2/state.lv2/manifest.ttl exists in filesystem
lv2: /usr/lib64/lv2/state.lv2/state.h exists in filesystem
lv2: /usr/lib64/lv2/state.lv2/state.ttl exists in filesystem
lv2: /usr/lib64/lv2/time.lv2/lv2-time.doap.ttl exists in filesystem
lv2: /usr/lib64/lv2/time.lv2/manifest.ttl exists in filesystem
lv2: /usr/lib64/lv2/time.lv2/time.h exists in filesystem
lv2: /usr/lib64/lv2/time.lv2/time.ttl exists in filesystem
lv2: /usr/lib64/lv2/ui.lv2/lv2-ui.doap.ttl exists in filesystem
lv2: /usr/lib64/lv2/ui.lv2/manifest.ttl exists in filesystem
lv2: /usr/lib64/lv2/ui.lv2/ui.h exists in filesystem
lv2: /usr/lib64/lv2/ui.lv2/ui.ttl exists in filesystem
lv2: /usr/lib64/lv2/units.lv2/lv2-units.doap.ttl exists in filesystem
lv2: /usr/lib64/lv2/units.lv2/manifest.ttl exists in filesystem
lv2: /usr/lib64/lv2/units.lv2/units.h exists in filesystem
lv2: /usr/lib64/lv2/units.lv2/units.ttl exists in filesystem
lv2: /usr/lib64/lv2/uri-map.lv2/lv2-uri-map.doap.ttl exists in filesystem
lv2: /usr/lib64/lv2/uri-map.lv2/manifest.ttl exists in filesystem
lv2: /usr/lib64/lv2/uri-map.lv2/uri-map.h exists in filesystem
lv2: /usr/lib64/lv2/uri-map.lv2/uri-map.ttl exists in filesystem
lv2: /usr/lib64/lv2/urid.lv2/lv2-urid.doap.ttl exists in filesystem
lv2: /usr/lib64/lv2/urid.lv2/manifest.ttl exists in filesystem
lv2: /usr/lib64/lv2/urid.lv2/urid.h exists in filesystem
lv2: /usr/lib64/lv2/urid.lv2/urid.ttl exists in filesystem
lv2: /usr/lib64/lv2/worker.lv2/lv2-worker.doap.ttl exists in filesystem
lv2: /usr/lib64/lv2/worker.lv2/manifest.ttl exists in filesystem
lv2: /usr/lib64/lv2/worker.lv2/worker.h exists in filesystem
lv2: /usr/lib64/lv2/worker.lv2/worker.ttl exists in filesystem
Errors occurred, no packages were upgraded.

On Wed, Oct 19, 2016 at 3:31 PM, Ray Rashif via arch-general
 wrote:
> On 10 October 2016 at 11:10, David Runge  wrote:
>> On 2016-09-28 00:32:38 (+0600), Ray Rashif via arch-general wrote:
>>> I have just gotten 

Re: [arch-general] What is aclocal?

2016-04-20 Thread Anatol Pomozov
Hi

On Wed, Apr 20, 2016 at 2:14 AM, Gerhard Kugler  wrote:
>> In automake, which belongs to the base-devel group, which you are supposed
>> to have installed if you want to build packages from AUR
>
> This was the solution. I had to install automake which was not
> installed.

As mentioned above, you need to install the whole 'base-devel' group.
Please read the documentation for your own benefit:
https://wiki.archlinux.org/index.php/Arch_User_Repository


Re: [arch-general] race condition when upgrading the new ncurses package

2015-09-19 Thread Anatol Pomozov
Hi

On Sat, Sep 19, 2015 at 5:24 PM, Ralf Mardorf 
wrote:

>
> However, installing just a few packages instead of all packages needs
> to be done, if the Internet connection get interrupted too often to
> make a complete upgrade in one step. So a note by the Arch news IMO
> still is useful.
>

Pacman downloads all packages before installing any of them. So if you have
a bad Internet connection you'll be stuck at the first (download) step;
it will not leave your system in a broken state.
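That download-then-commit behavior can be modeled as a two-phase transaction: the install phase never starts until every download has succeeded. A toy sketch of that invariant (illustrative only, not pacman's actual code; package names are hypothetical):

```python
def upgrade(packages, fetch, install):
    """Two-phase upgrade: fetch every package first, then install.

    If any fetch fails, the install phase never starts, so the
    system is left untouched."""
    fetched = [fetch(p) for p in packages]  # phase 1: a failure raises here
    for f in fetched:                       # phase 2: runs only if phase 1 finished
        install(f)

installed = []

def fetch(name):
    if name == "chromium":  # hypothetical package whose download fails
        raise IOError("connection dropped")
    return name

try:
    upgrade(["glibc", "chromium", "linux"], fetch, installed.append)
except IOError:
    pass

print(installed)  # nothing was installed
```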


Re: [arch-general] PKGBUILD ERROR

2015-09-11 Thread Anatol Pomozov
Hi

On Fri, Sep 11, 2015 at 12:28 AM, mudongliang 
wrote:
>
> ==> ERROR: Running makepkg as root is not allowed as it can cause
> permanent,
> catastrophic damage to your system.
>

What exactly in the message above is not clear to you?


Re: [arch-general] Realtek 8111/8168/8411 Blues - cannot get dhcpcd address (link UP)

2015-08-20 Thread Anatol Pomozov
Hi

On Thu, Aug 20, 2015 at 3:31 PM, David C. Rankin
drankina...@suddenlinkmail.com wrote:
 All,

   As a continuation of the disc controller failure/system rebuild, I have
 the new box built and a pair of fresh drives waiting for a new Arch install.
 This motherboard has the Realtek 8111/8168/8411 chipset. (Gigabyte
 GA-990FXA-UD3 motherboard).

   I am booting with the latest install medium which boots fine in legacy or
 EUFI mode. No matter what I do, I cannot get an IPv4 address.

Check this thread and see whether it matches what you are seeing:
https://bbs.archlinux.org/viewtopic.php?id=200514

What happens if you downgrade to dhcpcd-6.9.0 ?


 I have read:

 https://wiki.archlinux.org/index.php/Network_configuration#Realtek_no_link_.2F_WOL_problem

 (that's not the problem, link light is on, activity indicator is flashing,
 and link is reported 'Up' by 'ip link')

   I have read:

 https://wiki.archlinux.org/index.php/Network_configuration#Realtek_RTL8111.2F8168B

  - installed (pacman -U r8168-8.040.00-5-x86_64.pkg.tar.xz)
  - blacklisted r8169
  - loaded r8168
  - confirmed the NIC is using r8168 w/lspci -v
  - systemctl restart dhcpcd (many times)
  - systemctl status dhcpcd reports
   no IPv6 Routers
   no IPv4 Leases
   request timeout

 It's not the cable or my dhcpd server, I boot the box with the failed disc
 controller, and it is assigned an address just fine. (same cable)

 I'm stuck, looking at the log on my dhcp server, the requests are never
 seen. It's like the card isn't sending, but the link light is fine and the
 activity light on the NIC is flashes when it sees traffic?

 What else can I try?


 --
 David C. Rankin, J.D.,P.E.


Re: [arch-general] Debugging third-party library's segfault if its caused by system update?

2015-06-09 Thread Anatol Pomozov
Hi

On Tue, Jun 9, 2015 at 12:26 AM, Oon-Ee Ng ngoonee.t...@gmail.com wrote:
 I use the openni2 library to access an Asus Xtion Pro Live camera,
 installed from the AUR and working fine up till 2+ weeks ago.

 After a 2 week holiday, the most recent system update caused segfaults
 to happen within the library (both before and after rebuilding it),
 without any change to the code calling the library. Same segfault
 happens with the simple sample applications included in the library
 (previously running fine).

 How do I track down the issue? The library's source code is available,
 but without knowing it well I'm unsure where to even begin.

 Normally I'd contact the authors, but as this issue was caused (on my
 system) by a system update I think I'd need to do some tracking down
 first.

Do you use an Intel CPU? Try setting up microcode updates and see if that
helps: https://wiki.archlinux.org/index.php/Microcode


Re: [arch-general] Aftpd won't start without nogroup group

2015-06-07 Thread Anatol Pomozov
Hi

2015-06-07 8:38 GMT-07:00 Bráulio Bhavamitra brau...@eita.org.br:
 Hello all,

 I don't know where to report this, so sorry to do it here.

 atftpd package needs nogroup so that atftpd can start, otherwise it will
 exit.

atftpd uses the 'nobody' group by default
https://projects.archlinux.org/svntogit/community.git/tree/trunk/atftpd.conf?h=packages/atftp
which is a system group: https://wiki.archlinux.org/index.php/Users_and_groups


Re: [arch-general] Connection through USB to TTL Serial Cable

2015-05-20 Thread Anatol Pomozov
Hi

On Wed, May 20, 2015 at 11:55 AM, Csányi Pál csanyi...@gmail.com wrote:
 Hello,

 I have a
 FTDI TTL-232RG-VREG3V3-WE
 USB to TTL Serial Cable.

 On my Arch linux I want to setup a connection through this cable.

 How can I achieve this goal?

 The output of the 'lsusb' command is:
 Bus 002 Device 005: ID 0403:6001 Future Technology Devices
 International, Ltd FT232 USB-Serial (UART) IC

 I'm trying to follow steps described here:
 http://linux-sunxi.org/Cubieboard/TTL

 but I have more difficulties.

 1. I can't find to install the 'cu' utility neither on Arch linux
 repository nor in AUR.

cu is in the Arch repos, as mentioned above. There are also other terminal
applications: minicom, com.

 2. There is no '/dev/ttyUSB0' on my system here.

Then it means the driver created the device under a different name. Check
'dmesg': the kernel logs the name of the USB device node that was created.
It can be /dev/ttyUSB1, ... /dev/ttyACM0, ...

Also make sure this file is readable by your user; you might need to be
added to the right UNIX group.
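The name-guessing step can be sketched as a small filter over /dev entries; the patterns below are the usual USB-serial node names (ttyUSB*, ttyACM*), and the sample device list is hypothetical:

```python
import fnmatch

def serial_candidates(devnames):
    """Pick out device names that look like USB-serial adapters."""
    patterns = ("ttyUSB*", "ttyACM*")
    return sorted(n for n in devnames
                  if any(fnmatch.fnmatch(n, p) for p in patterns))

# Hypothetical /dev listing; on a real system use os.listdir("/dev")
print(serial_candidates(["ttyUSB1", "ttyACM0", "ttyS0", "sda"]))
```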


 So eg.
 screen /dev/ttyUSB0 115200

 doesn't work here:
 Cannot exec '/dev/ttyUSB0': No such file or directory

 I can't find any advices on Arch Wiki.

 Any advices will be appreciated!

 --
 Regards from Pal


Re: [arch-general] Severity of Failed checksum for PKGBUILD

2015-02-20 Thread Anatol Pomozov
Hi

On Thu, Feb 19, 2015 at 2:24 PM, Lukas Jirkovsky l.jirkov...@gmail.com wrote:
 On 19 February 2015 at 21:42, Doug Newgard scim...@archlinux.info wrote:
 You can't. If upstream provides a checksum, that gives you some verification,
 but since github doesn't, there's no way to verify any of it.

 I don't know about github, but with bitbucket the checksums of these
 generated tarballs may change occasionally as I had this issue with
 luxrender.

Any project that uses JGit (like Gerrit, which is used by Chromium) has
this problem as well.

https://bugs.eclipse.org/bugs/show_bug.cgi?id=445819

 However, the sources were always the same, it was the
 metadata that changed.
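The "same sources, different checksum" effect is easy to reproduce locally: gzip embeds a modification time in its header, so two archives of identical content can hash differently. A minimal illustration of the general metadata effect (not JGit's exact behavior):

```python
import gzip, hashlib, io

def gz_bytes(data, mtime):
    """Compress data, forcing a specific mtime into the gzip header."""
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="wb", mtime=mtime) as f:
        f.write(data)
    return buf.getvalue()

payload = b"identical source tree"
a = gz_bytes(payload, mtime=1000000)
b = gz_bytes(payload, mtime=2000000)

# The archives hash differently even though the content is identical.
print(hashlib.sha256(a).hexdigest() == hashlib.sha256(b).hexdigest())
print(gzip.decompress(a) == gzip.decompress(b))
```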


Re: [arch-general] Maybe bug in glibc

2015-02-19 Thread Anatol Pomozov
Hi

On Thu, Feb 19, 2015 at 1:28 PM, Klaus thor...@brothersofgrey.net wrote:
 Anatol Pomozov schrieb:
 Hi

 On Wed, Feb 18, 2015 at 1:43 PM, Klaus thor...@brothersofgrey.net wrote:
  Anatol Pomozov schrieb:
  Hi
 
  On Wed, Feb 18, 2015 at 12:55 PM, Klaus thor...@brothersofgrey.net 
  wrote:
   Hi,
  
   my firefox and seamonkey have been segfaulting for a month or so.
  
   Running  the software with gdb produces the following errors:
  
   Firefox:
   Program received signal SIGSEGV, Segmentation fault.
   0x77de6567 in _dl_relocate_object () from
   /lib64/ld-linux-x86-64.so.2
 
  Recompile glib and then provide more meaningful backtrace (all the
  functions in the stack and its parameters) for your error.
 
 
  This is the entire stack:
 
 
 [...]
 

 You did *not* recompile glibc with debug symbols enabled. The stack is
 still not very useful.

 I remember similar glibc bug https://bugs.archlinux.org/task/38069 but
 it should be fixed in 2.21. Make sure your system is up-to-date.


 Ok, here is a better one:

 Program received signal SIGSEGV, Segmentation fault.
 _dl_relocate_object (scope=0x7fffd49cd358,
 reloc_mode=reloc_mode@entry=1,
 consider_profiling=consider_profiling@entry=0) at dl-reloc.c:238
 238 const char *strtab = (const void *) D_PTR (l,
 l_info[DT_STRTAB]);
 (gdb) bt
 #0  _dl_relocate_object (scope=0x7fffd49cd358,
 reloc_mode=reloc_mode@entry=1,
 consider_profiling=consider_profiling@entry=0) at dl-reloc.c:238
 #1  0x77dee8d1 in dl_open_worker (a=a@entry=0x7ffe84e8)
 at dl-open.c:418
 #2  0x77dea145 in _dl_catch_error (
 objname=objname@entry=0x7ffe84d8,
 errstring=errstring@entry=0x7ffe84e0,
 mallocedp=mallocedp@entry=0x7ffe84d7,
 operate=operate@entry=0x77dee590 dl_open_worker,
 args=args@entry=0x7ffe84e8) at dl-error.c:187
 #3  0x77dee003 in _dl_open (
 file=0x7fffcdae7148 /usr/lib/mozilla/plugins/libflashplayer.so,
 mode=-2147483647,
 caller_dlopen=0x721a158b PR_LoadLibraryWithFlags+219,
 nsid=-2,
 argc=optimized out, argv=optimized out, env=0x76ca7800)
 at dl-open.c:652

The problem is with opening the file
/usr/lib/mozilla/plugins/libflashplayer.so. Maybe it is corrupted?

Try reinstalling the flashplugin package.

 #4  0x779bafe3 in dlopen_doit (a=a@entry=0x7ffe8730) at
 dlopen.c:66
 #5  0x77dea145 in _dl_catch_error (objname=0x76c12050,
 errstring=0x76c12058, mallocedp=0x76c12048,
 operate=0x779baf80 dlopen_doit, args=0x7ffe8730)
 at dl-error.c:187
 #6  0x779bb61a in _dlerror_run (
 operate=operate@entry=0x779baf80 dlopen_doit,
 ---Type return to continue, or q return to quit---
 args=args@entry=0x7ffe8730) at dlerror.c:163
 #7  0x779bb083 in __dlopen (file=optimized out,
 mode=optimized out)
 at dlopen.c:87


 My System is up-to-date.

 Greetings

 --
 Jabber: thor...@deshalbfrei.org  PGP/GnuPG: 0x326F6D7B


Re: [arch-general] Maybe bug in glibc

2015-02-18 Thread Anatol Pomozov
Hi

On Wed, Feb 18, 2015 at 1:43 PM, Klaus thor...@brothersofgrey.net wrote:
 Anatol Pomozov schrieb:
 Hi

 On Wed, Feb 18, 2015 at 12:55 PM, Klaus thor...@brothersofgrey.net wrote:
  Hi,
 
  my firefox and seamonkey have been segfaulting for a month or so.
 
  Running  the software with gdb produces the following errors:
 
  Firefox:
  Program received signal SIGSEGV, Segmentation fault.
  0x77de6567 in _dl_relocate_object () from
  /lib64/ld-linux-x86-64.so.2

 Recompile glib and then provide more meaningful backtrace (all the
 functions in the stack and its parameters) for your error.


 This is the entire stack:

 Program received signal SIGSEGV, Segmentation fault.
 0x77de6567 in _dl_relocate_object () from
 /lib64/ld-linux-x86-64.so.2
 (gdb) bt
 #0  0x77de6567 in _dl_relocate_object () from
 /lib64/ld-linux-x86-64.so.2
 #1  0x77dee719 in dl_open_worker () from
 /lib64/ld-linux-x86-64.so.2
 #2  0x77dea0a4 in _dl_catch_error () from
 /lib64/ld-linux-x86-64.so.2
 #3  0x77dede53 in _dl_open () from /lib64/ld-linux-x86-64.so.2
 #4  0x779bafc9 in ?? () from /usr/lib/libdl.so.2
 #5  0x77dea0a4 in _dl_catch_error () from
 /lib64/ld-linux-x86-64.so.2
 #6  0x779bb599 in ?? () from /usr/lib/libdl.so.2
 #7  0x779bb061 in dlopen () from /usr/lib/libdl.so.2

You did *not* recompile glibc with debug symbols enabled. The stack is
still not very useful.

I remember a similar glibc bug https://bugs.archlinux.org/task/38069 but
it should be fixed in 2.21. Make sure your system is up-to-date.

 #8  0x721a158b in PR_LoadLibraryWithFlags () from
 /usr/lib/libnspr4.so
 #9  0x73da1186 in ?? () from /usr/lib/firefox/libxul.so
 #10 0x73da1663 in ?? () from /usr/lib/firefox/libxul.so
 #11 0x73d8c922 in ?? () from /usr/lib/firefox/libxul.so
 #12 0x73d8d0de in ?? () from /usr/lib/firefox/libxul.so
 #13 0x73d8d262 in ?? () from /usr/lib/firefox/libxul.so
 #14 0x73d87f9a in ?? () from /usr/lib/firefox/libxul.so
 #15 0x73d8dc6a in ?? () from /usr/lib/firefox/libxul.so
 #16 0x750d58c9 in NS_InvokeByIndex () from
 /usr/lib/firefox/libxul.so
 #17 0x74b168b0 in ?? () from /usr/lib/firefox/libxul.so
 #18 0x74b1dd9f in ?? () from /usr/lib/firefox/libxul.so
 #19 0x74fd882b in ?? () from /usr/lib/firefox/libxul.so
 #20 0x74fd1ee3 in ?? () from /usr/lib/firefox/libxul.so
 #21 0x74fca7ba in ?? () from /usr/lib/firefox/libxul.so
 #22 0x74fd88ee in ?? () from /usr/lib/firefox/libxul.so
 #23 0x74f503de in ?? () from /usr/lib/firefox/libxul.so
 #24 0x74fd882b in ?? () from /usr/lib/firefox/libxul.so
 #25 0x74fd9584 in ?? () from /usr/lib/firefox/libxul.so
 #26 0x74fbea93 in ?? () from /usr/lib/firefox/libxul.so
 #27 0x74fbb9d3 in
 js::CrossCompartmentWrapper::call(JSContext*, JS::HandleJSObject*,
 JS::CallArgs const) const () from /usr/lib/firefox/libxul.so
 #28 0x74fbfee1 in js::proxy_Call(JSContext*, unsigned int,
 JS::Value*) () from /usr/lib/firefox/libxul.so
 #29 0x74fd8994 in ?? () from /usr/lib/firefox/libxul.so
 #30 0x74fd1ee3 in ?? () from /usr/lib/firefox/libxul.so
 #31 0x74fca7ba in ?? () from /usr/lib/firefox/libxul.so
 #32 0x74fd88ee in ?? () from /usr/lib/firefox/libxul.so
 #33 0x74fd961e in ?? () from /usr/lib/firefox/libxul.so
 #34 0x74fbea93 in ?? () from /usr/lib/firefox/libxul.so
 #35 0x74fbb9d3 in
 js::CrossCompartmentWrapper::call(JSContext*, JS::HandleJSObject*,
 JS::CallArgs const) const () from /usr/lib/firefox/libxul.so
 #36 0x74fbfee1 in js::proxy_Call(JSContext*, unsigned int,
 JS::Value*) () from /usr/lib/firefox/libxul.so
 #37 0x74fd8994 in ?? () from /usr/lib/firefox/libxul.so
 #38 0x74fd1ee3 in ?? () from /usr/lib/firefox/libxul.so
 #39 0x74fca7ba in ?? () from /usr/lib/firefox/libxul.so
 #40 0x74fd88ee in ?? () from /usr/lib/firefox/libxul.so
 #41 0x74f503de in ?? () from /usr/lib/firefox/libxul.so
 #42 0x74fd882b in ?? () from /usr/lib/firefox/libxul.so
 #43 0x74fd9584 in ?? () from /usr/lib/firefox/libxul.so
 ---Type return to continue, or q return to quit---
 #44 0x74fbea93 in ?? () from /usr/lib/firefox/libxul.so
 #45 0x74fbb9d3 in
 js::CrossCompartmentWrapper::call(JSContext*, JS::HandleJSObject*,
 JS::CallArgs const) const () from /usr/lib/firefox/libxul.so
 #46 0x74fbfee1 in js::proxy_Call(JSContext*, unsigned int,
 JS::Value*) () from /usr/lib/firefox/libxul.so
 #47 0x74fd8994 in ?? () from /usr/lib/firefox/libxul.so
 #48 0x74fd1ee3 in ?? () from /usr/lib/firefox/libxul.so
 #49 0x74fca7ba in ?? () from /usr/lib/firefox/libxul.so
 #50 0x74fd88ee in ?? () from /usr/lib/firefox/libxul.so
 #51 0x74fd961e in ?? () from /usr/lib/firefox/libxul.so
 #52 0x74fbea93 in ?? () from /usr

Re: [arch-general] Maybe bug in glibc

2015-02-18 Thread Anatol Pomozov
Hi

On Wed, Feb 18, 2015 at 12:55 PM, Klaus thor...@brothersofgrey.net wrote:
 Hi,

 my firefox and seamonkey have been segfaulting for a month or so.

 Running  the software with gdb produces the following errors:

 Firefox:
 Program received signal SIGSEGV, Segmentation fault.
 0x77de6567 in _dl_relocate_object () from
 /lib64/ld-linux-x86-64.so.2

Recompile glibc with debug symbols and then provide a more meaningful
backtrace (all the functions in the stack and their parameters) for your
error.
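For reference, rebuilding a package with debug symbols on Arch usually means adjusting the makepkg options before building; a minimal sketch, assuming the stock /etc/makepkg.conf layout (a per-build override works too):

```bash
# /etc/makepkg.conf
# Keep debug info and do not strip binaries in the built package.
OPTIONS+=(debug !strip)
```

With those options set, rebuild and reinstall the glibc package; gdb should then show function names, source files, and line numbers instead of bare addresses.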

 Seamonkey:
 Program received signal SIGSEGV, Segmentation fault.
 0x77de6567 in _dl_relocate_object () from
 /lib64/ld-linux-x86-64.so.2

 /lib64/ld-linux-x86-64.so.2 is owned by glibc 2.21-2.

 I have already tested the laptop ram with memtest86 and I got no
 errors.

 Deleting and creating new browser profiles in home does not change
 anything, too.

 Does anyone have the same problem?


 Greetings,
 Klaus

 --
 Jabber: thor...@deshalbfrei.org  PGP/GnuPG: 0x326F6D7B


Re: [arch-general] Running Arch linux on a headless ppc GNU/Linux box?

2015-02-10 Thread Anatol Pomozov
Hi

On Tue, Feb 10, 2015 at 10:26 AM, Csányi Pál csanyi...@gmail.com wrote:
 Hi,

 I have a headless PowerPC box called Bubba Two.

 Could one install on it the Arch linux system?

No. Arch does not support the PPC architecture.


 I'm running on it now a Debian Wheezy GNU/Linux system.

 $ uname -a
 Linux b2 3.2.62-1 #1 Mon Aug 25 04:22:40 UTC 2014 ppc GNU/Linux

 --
 Regards from Pal


Re: [arch-general] Why are CA certifcates writable for every user?

2015-02-05 Thread Anatol Pomozov
Hi

On Thu, Feb 5, 2015 at 11:15 AM, David Rosenstrauch dar...@darose.net wrote:
 Symlinks often (always?) show as 777 permissions.

The Linux manpage for symlinks states
http://man7.org/linux/man-pages/man7/symlink.7.html

 On Linux, the permissions of a symbolic link are not used in any
 operations; the permissions are always 0777 (read, write, and execute
 for all user categories), and can't be changed.
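This is easy to verify from any language; a quick sketch using Python's os.lstat on a throwaway temporary directory:

```python
import os, stat, tempfile

d = tempfile.mkdtemp()
target = os.path.join(d, "target")
link = os.path.join(d, "link")

open(target, "w").close()
os.chmod(target, 0o600)          # restrictive permissions on the target
os.symlink(target, link)

# lstat inspects the link itself, not what it points to; on Linux a
# symlink's permission bits are always 0777.
mode = stat.S_IMODE(os.lstat(link).st_mode)
print(oct(mode))
```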


Re: [arch-general] dhcpcd not working after install from custom iso

2014-12-17 Thread Anatol Pomozov
Hi

On Wed, Dec 17, 2014 at 10:22 PM, Christian Demsar
vixsom...@fastmail.com wrote:

 I had internet connection when installing from an iso I built using the
 archiso tools, but dhcpcd isn't connecting any more (starting via
 sytemd). I've also had internet access in previous installs of archlinux
 and FreeBSD, so I don't think there's anything wrong with the hardware.

 [dmesg output] http://pastebin.com/vtVRid1Y
 [ip link output] http://pastebin.com/gaZxUCmf

 The device I'm using is listed as enp2s0. It hangs, even though it's set
 up (pastebin for proof). Waiting for carrier? I also noticed that at the
 bottom of my dmesg, it reports only IPv6, not IPv4. Is this normal
 behavior? I think my network only supports IPv4, for dhcp anyway.

 Since I have to register the addresses of the NICs with the ISP, only
 enp2s0 should hypothetically be able to connect, not enp1s0. This is
 also the first time I've had a problem with dhcpcd.

 If it makes a difference, I built a custom iso (base) and dropped the
 bcache-tools package in /etc so I could set that up. I made no other
 modifications to the bulid.

 dhcpcd is at version 6.6.4-1 and is fully up-to-date.

 == LOG SNIPPET ==

 dhcpcd@enp2s0.service - dhcpcd on enp2s0
 Loaded: loaded (/usr/lib/systemd/system/dhcpcd@.service; disabled)
 Active: failed (Result: exit-code) since Wed 2014-12-17 09:15:46 EST;
 51s ago
 Process: 542 ExecStart=/usr/bin/dhcpcd -q -w %I (code=exited,
 status=1/FAILURE)

This log does not provide enough information about the failure. Run
dhcpcd manually with debugging enabled:

sudo dhcpcd -q -w enp2s0 -d

And you'll have better luck debugging this problem by sending your
questions to the upstream mailing list.


 Dec 17 09:15:16 vss dhcpcd[542]: version 6.6.4 starting
 Dec 17 09:15:16 vss dhcpcd[542]: enp2s0: waiting for carrier
 Dec 17 09:15:46 vss systemd[1]: dhcpcd@enp2s0.service: control process
 exited, code=exited status=1
 Dec 17 09:15:46 vss systemd[1]: Failed to start dhcpcd on enp2s0.
 Dec 17 09:15:46 vss systemd[1]: Unit dhcpcd@enp2s0.service entered
 failed state.
 Dec 17 09:15:46 vss systemd[1]: dhcpcd@enp2s0.service failed.
 --
 vixsomnis


Re: [arch-general] Cannot install mlpack from AUR

2014-10-29 Thread Anatol Pomozov
Hi

On Wed, Oct 29, 2014 at 1:10 PM, Thorsten Jolitz tjol...@gmail.com wrote:

 Hi List,

 I cannot install the mlpack package from AUR

 ,
 | ID  : 105550
 | Name: mlpack
 | Version : 1.0.9-1
 | Maintainer  : govg
 | : https://aur.archlinux.org/account/govg
 | Description : a scalable c++ machine learning library
 | Home Page   : http://www.mlpack.org
 | AUR Page: https://aur.archlinux.org/packages/mlpack
 | Package Base: https://aur.archlinux.org/pkgbase/mlpack
 | License : LGPLv3
 | Category: science
 | Votes   : 2
 | Out Of Date : No
 | Submitted   : 2012-07-19 00:54:26
 | Last Modified   : 2014-08-27 13:14:07
 `

 because as the final step of makepkg it runs like 400+ tests, which is
 very timeconsuming, and makes the package build fail since there are not
 only a lot of warnings but also 3 failing tests.

You can skip the tests with 'makepkg --nocheck'.


 Did others face the same problem?

 --
 cheers,
 Thorsten


Re: [arch-general] Kernel code dump retrieval

2014-10-16 Thread Anatol Pomozov
Hi

On Thu, Oct 16, 2014 at 4:57 AM, Leonidas Spyropoulos
artafi...@gmail.com wrote:
 Hello list,

 I'm experiencing an issue while compiling big projects (i.e. linux
 kernel but not limited to that). The issue seems to be related to
 CPUFREQ and I'm trying to track it down.

 While compiling the linux-ck kernel the kernel panics and produce a core
 dumps.

When you say it produces core dumps, what exactly do you see? How do you
know it produces a kernel dump?

 I'm trying to get the core dump but I'm not able to access it after
 hard reset. I tried enabling journalctl Storage=Auto to write to disk without
 luck.

After the crash happens, the kernel cannot write anything to disk or send
it over the network: dealing with disk/network/... requires valid kernel
data structures, and you don't have them anymore.

Once the kernel has crashed, it has only one option: reboot.


 My FS is btrfs and the CPU is AMD FX-8120. The CPU is not overclocked
 [1] and the Cool And Quiet is enabled alond with other power saving
 options in BIOS (like C6 State).

 How can I access the kernel core dump after crash?

Check kdump https://wiki.archlinux.org/index.php/Kdump - it is
probably what you are looking for.


Re: [arch-general] Kernel code dump retrieval

2014-10-16 Thread Anatol Pomozov
Hi

On Thu, Oct 16, 2014 at 9:11 AM, Leonidas Spyropoulos
artafi...@gmail.com wrote:
 On 16/10/14, Anatol Pomozov wrote:
 When you say produce a core dumps what exactly you see. How do you
 know it produces the kernel dump?

 I usually build the AUR package from within X. But sometimes I do it on
 another TTY. In those cases, where I do it from a TTY, I was able to see
 part of a core dump (messages about a kernel panic and then some more
 output). The problem was that it was only part of it and I could not
 scroll to see the whole message.

So you don't need a full kernel memory dump that preserves the contents
of RAM on crash. You just need the kernel stack trace message, right?

Check the pause_on_oops kernel parameter
https://www.kernel.org/doc/Documentation/kernel-parameters.txt - it
might help you keep the message on screen before it scrolls away.

Another option is to use a serial port and watch the kernel messages
from a remote machine:
https://wiki.archlinux.org/index.php/Working_with_the_serial_console
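If the box uses GRUB, both suggestions can be combined on the kernel command line - a sketch, assuming a serial console on ttyS0 at 115200 baud (adjust the port, speed, and pause time for your hardware):

```shell
# /etc/default/grub -- example values only
GRUB_CMDLINE_LINUX_DEFAULT="console=tty0 console=ttyS0,115200 pause_on_oops=60"
GRUB_SERIAL_COMMAND="serial --speed=115200"
GRUB_TERMINAL_OUTPUT="console serial"

# then regenerate the config:
# grub-mkconfig -o /boot/grub/grub.cfg
```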





 After the crash happens kernel cannot write anything to disk nor send
 via network. Dealing with disk/network/... requires valid kernel data
 structures and you don't have them anymore.

 Once kernel crashed it has only one option - reboot.

 Check kdump https://wiki.archlinux.org/index.php/Kdump - it is
 probably what you are looking for.

 But that involves a kernel compilation (right?) and I seem to end up in
 the same problem as before - the kernel crashing when compiling big
 projects.
Hm... I would recommend running a memory test to make sure it is not a
hardware problem with your RAM.


 Is there a kernel with Kdump enabled already?

I do not think so. I compiled my own custom kdump kernel, but it can
be built on another machine and then installed on the problematic
machine.


Re: [arch-general] imagemagick 6.8.9.8-1 is slower than 6.8.9.7-1

2014-10-16 Thread Anatol Pomozov
Hi

On Thu, Oct 16, 2014 at 7:39 PM, Matthew Wynn m-w...@live.com wrote:

 After upgrading to imagemagick 6.8.9.8-1, I've found it to be a lot slower
 than 6.8.9.7-1.  I only get this issue when downloading from the
 repositories or using the PKGBUILD, not when compiling using the
 instructions at imagemagick.org.

 I've discussed the issue with the ImageMagick developers, which you can
 find at
 http://www.imagemagick.org/discourse-server/viewtopic.php?f=3&t=26369&sid=a29400f51bd723e98da6b74c0819e9b3.
 Even when trying to convert a 500x500 image to 100x100, it appears that an
 image that is 2048x1536 is created.

 I've tried this on multiple Arch machines with the same result.


Use a code profiler (e.g. 'perf') to record where the tool spends its
CPU cycles. Then compare the two versions to see where the difference
is and why the newer version is slower.


Re: [arch-general] imagemagick 6.8.9.8-1 is slower than 6.8.9.7-1

2014-10-16 Thread Anatol Pomozov
Hi

On Thu, Oct 16, 2014 at 8:30 PM, Matthew Wynn m-w...@live.com wrote:

 As shown in the forum post I linked, here is a strace for 6.8.9.7:

  $ strace -c convert /tmp/test.jpg -limit thread 4 -thumbnail
 100x100 -gravity center -background none -extent 100x100
 /tmp/mpdcover.png
 % time seconds  usecs/call callserrors syscall
 -- --- --- - - 
 100.000.33   147   read
   0.000.00   0 3   write
   0.000.00   08431 open
   0.000.00   054   close
   0.000.00   012 3 stat
   0.000.00   052   fstat
   0.000.00   010   lseek
   0.000.00   0   117   mmap
   0.000.00   072   mprotect
   0.000.00   025   munmap
   0.000.00   0 9   brk
   0.000.00   011   rt_sigaction
   0.000.00   019   rt_sigprocmask
   0.000.00   0 4 1 access
   0.000.00   0 1   clone
   0.000.00   0 1   execve
   0.000.00   0 2   getdents
   0.000.00   0 1   getcwd
   0.000.00   0 1   readlink
   0.000.00   0 2   getrlimit
   0.000.00   018   times
   0.000.00   0 1   arch_prctl
   0.000.00   0 8   futex
   0.000.00   0 1   sched_getaffinity
   0.000.00   0 1   set_tid_address
   0.000.00   0 2 1 openat
   0.000.00   0 1   set_robust_list
 -- --- --- - - 
 100.000.33   55936 total

 and for 6.8.9.8
 $ strace -c convert /tmp/test.jpg -limit thread 4 -thumbnail
 100x100 -gravity center -background none -extent 100x100
 /tmp/mpdcover.png
 % time seconds  usecs/call callserrors syscall
 -- --- --- - - 
  56.990.001325  2946   munmap
  43.010.001000  1856   futex
   0.000.00   050   read
   0.000.00   0 4   write
   0.000.00   08931 open
   0.000.00   059   close
   0.000.00   02313 stat
   0.000.00   055   fstat
   0.000.00   010   lseek
   0.000.00   0   139   mmap
   0.000.00   076   mprotect
   0.000.00   012   brk
   0.000.00   011   rt_sigaction
   0.000.00   019   rt_sigprocmask
   0.000.00   0 5 1 access
   0.000.00   0 9   madvise
   0.000.00   0 3   clone
   0.000.00   0 1   execve
   0.000.00   0 2   getdents
   0.000.00   0 1   getcwd
   0.000.00   0 1   readlink
   0.000.00   0 2   getrlimit
   0.000.00   056   times
   0.000.00   0 1   arch_prctl
   0.000.00   0 1   sched_getaffinity
   0.000.00   0 1   set_tid_address
   0.000.00   0 2 1 openat
   0.000.00   0 1   set_robust_list
 -- --- --- - - 
 100.000.002325   73546 total


The strace output says that the CPU time spent by syscalls in the
kernel is small (2.3 ms), so the rest is spent in userspace. You need a
profiler (e.g. perf) to get more information. Run 'perf record -g
$YOUR_COMMAND', then 'perf report' and 'perf annotate'. perf annotate
shows better results if you recompile imagemagick with debug symbols
enabled.
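As a small aside, the 'total' row of the second strace table (its columns are separated below for readability, since the quoted output lost its spacing) can be pulled apart with awk to get that 2.3 ms figure:

```shell
# 'total' row of the strace -c table for 6.8.9.8:
# %time  seconds  calls  errors  syscall
total_line='100.00 0.002325 735 46 total'
# column 2 is the cumulative time (in seconds) spent inside syscalls
secs=$(echo "$total_line" | awk '{ print $2 }')
echo "time spent in syscalls: ${secs}s"   # prints: time spent in syscalls: 0.002325s
```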

I highly recommend learning perf (independently of this issue); it is a
very powerful and useful development tool. See more information here:
https://perf.wiki.kernel.org/index.php/Tutorial
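A sketch of that workflow for this particular report - the convert invocation and the output file names are examples, and 'perf diff' can then compare the two recordings directly:

```shell
# profile the old and the new convert binaries separately
perf record -g -o perf-old.data -- convert test.jpg -thumbnail 100x100 out.png
perf record -g -o perf-new.data -- convert test.jpg -thumbnail 100x100 out.png

# inspect each profile, then drill into the hot functions
perf report -i perf-new.data
perf annotate -i perf-new.data

# compare the two profiles side by side
perf diff perf-old.data perf-new.data
```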



 Do you have a specific perf command that would be more revealing?

  Date: Thu, 16 Oct 2014 19:54:21 -0700
  From: anatol.pomo...@gmail.com
  To: arch-general@archlinux.org
  Subject: Re: 

Re: [arch-general] wsgi_mod

2014-09-16 Thread Anatol Pomozov
Hi

On Tue, Sep 16, 2014 at 8:50 AM, John Dey jsde...@gmail.com wrote:
 I have installed wsgi_mod and followed the instructions at 
 https://wiki.archlinux.org/index.php/mod_wsgi.  I am getting a server error:

 **
 Server error!

 The server encountered an internal error and was unable to complete your 
 request. Either the server is overloaded or there was an error in a CGI 
 script.

 If you think this is a server error, please contact the webmaster.

 Error 500

 jsdey.com
 Apache/2.4.10 (Unix) mod_wsgi/4.2.8 Python/3.4.1
 ***

 Can anyone give me some guidance re resolving error.  Thanks.

Check the systemd journal and the Apache logs for more information:
https://wiki.archlinux.org/index.php/Apache_HTTP_Server#Apache_Status_and_Logs


Re: [arch-general] Updates for avr-gcc and avr-libc

2014-08-20 Thread Anatol Pomozov
Hi

On Wed, Aug 20, 2014 at 10:54 AM, Karol Babioch ka...@babioch.de wrote:
 Hi,

 the current maintainer of avr-gcc and avr-libc, Jakob Gruber, seems to
 be away until October [1]. In the meantime new versions of both packages
 have been released. Due to a bug within avr-gcc 4.9.0, avr-libc 1.8.1
 cannot be built [2] and avr-gcc 4.9.1 is required, so actually both
 packages should be upgraded at once.

 Simply bumping the numbers within the appropriate PKGBUILD seems to be
 sufficient, some developer and/or TU could take care of this?

Just pushed updated avr-gcc avr-libc avr-gdb to [community-testing].
Should be available at your mirror soon. Please install the packages
from that repository and file a bug if you see any regressions.


Re: [arch-general] Help diagnosing kworker 'bug'

2014-08-08 Thread Anatol Pomozov
Hi

On Fri, Aug 8, 2014 at 6:58 PM, Oon-Ee Ng ngoonee.t...@gmail.com wrote:
 On Wed, Aug 6, 2014 at 3:43 PM, Oon-Ee Ng ngoonee.t...@gmail.com wrote:
 On Wed, Aug 6, 2014 at 10:40 AM, Anatol Pomozov
 anatol.pomo...@gmail.com wrote:
 'perf' is a great and very powerful tool that allows debugging problems
 like this. Run '# perf top -g -p $PID' and it will show where the
 process spends *cpu cycles*. It should be enough to understand what
 the kworker thread does. For all curious minds I highly recommend
 reading this tutorial https://perf.wiki.kernel.org/index.php/Tutorial

 Thanks, if my boy gets to sleep early tonight I'll do that.

 Having tried that out, I don't really understand the output. It seems
 the first column is CPU usage and the second is...? IO?

 Anyway these are the top 3 things in my output after a short amount of
 time. Other things which are low in CPU usage and high in the second
 column are find_next_zero_bit and _raw_spin_lock. Not sure what I
 should glean from this.
 +   17.74% 0.10%  [kernel]  [k] __filemap_fdatawrite_range
 +   15.04% 0.02%  [kernel]  [k] filemap_fdatawrite_range
 +9.93% 9.93%  [kernel]  [k] find_next_bit


The first column is CPU usage; I am not sure about the second column.
Click on the [+] symbol and it will show you the full call graph for
the function, which lets you see which subsystem calls it. If it is
btrfs, then please contact upstream:
http://vger.kernel.org/vger-lists.html#linux-btrfs

Another way to get more information about this problem is to use
kernel traces. Let's enable block and writeback events:

sudo su
cd /sys/kernel/debug/tracing
echo 1 > events/writeback/enable
echo 1 > events/block/enable
echo 1 > tracing_on
cat trace_pipe

there will be some information like process id, inode, ... Maybe you'll
see some pattern in the writes, etc.

In any case this problem sounds like an upstream issue and it is
better to contact them.


Re: [arch-general] Help diagnosing kworker 'bug'

2014-08-05 Thread Anatol Pomozov
Hi

On Tue, Aug 5, 2014 at 4:37 PM, Oon-Ee Ng ngoonee.t...@gmail.com wrote:
 Yesterday night I noticed (just before performing an update my conky
 showing high continuous writing to root. iotop -Pa shows this:-

 Total DISK READ :   0.00 B/s | Total DISK WRITE :   8.93 M/s
 Actual DISK READ:   0.00 B/s | Actual DISK WRITE:  11.06 M/s
   PID  PRIO  USER DISK READ  DISK WRITE  SWAPIN IOCOMMAND
   112 be/4 root  0.00 B 64.91 M  0.00 % 24.34 % [kworker/u16:3]
 11936 be/4 root  0.00 B  0.00 B  0.00 %  0.06 % [kworker/1:1]
 28794 be/4 root  0.00 B 36.00 K  0.00 %  0.00 % [kworker/u16:1]

 This is after roughly 6-7 seconds, and 65 MB has already been written
 by that kworker thread.

How long does this IO activity take? Could it be some kind of automatic
defragmentation or some other internal btrfs background optimization?
A good idea is to check the btrfs changelog for the 3.16 kernel release.


 As I said, this was already happening before an upgrade. I ran the
 upgrade anyway, which upgraded linux to 3.16-2, and still got the same
 thing. Yes, I'm using [testing].

 Any ideas on how to proceed? Next thing I'm going to try is
 downgrading linux to 3.15, but I thought I'd post this here first in
 case I don't make it back.

'perf' is a great and very powerful tool that allows debugging problems
like this. Run '# perf top -g -p $PID' and it will show where the
process spends *cpu cycles*. It should be enough to understand what the
kworker thread does. For all curious minds I highly recommend reading
this tutorial https://perf.wiki.kernel.org/index.php/Tutorial


Re: [arch-general] Error installing linux-3.15.5-1

2014-07-11 Thread Anatol Pomozov
Hi

On Fri, Jul 11, 2014 at 6:06 AM, Thorsten Jolitz tjol...@gmail.com wrote:

 Hi List,

 just did my usual pacman -Syu and was offered a new linux version, which
 I accepted to install, but got 1000s of error messages like this:

 ,
 | linux: /usr/lib/modules/3.15.5-1-ARCH/modules.symbols.bin exists in
 | filesystem
 | linux: /usr/lib/modules/extramodules-3.15-ARCH/version exists
 | in filesystem
 `

 A few hours before pacman complained that the DB is locked, and I
 removed the db-lock (as proposed) since I was sure I had no pacman
 operation going on.

 Do I have a corrupted pacman DB now? What would be the steps for
 diagnosis and repair?

https://wiki.archlinux.org/index.php/Pacman#I_get_an_error_when_updating:_.22file_exists_in_filesystem.22.21


Re: [arch-general] Installing sage-mathemtics failed

2014-07-09 Thread Anatol Pomozov
Hi

On Wed, Jul 9, 2014 at 6:03 AM, Csányi Pál csanyi...@gmail.com wrote:
 Hi,

 I just want to install sage-mathematics, but get error messages:

 sudo pacman -S sage-mathematics
 resolving dependencies...
 looking for inter-conflicts...

 Packages (1): sage-mathematics-6.2-2

 Total Installed Size:   2444.61 MiB

 :: Proceed with installation? [Y/n]
 (1/1) checking keys in keyring
 [###] 100%
 downloading required keys...
 :: Import PGP key 4096R/, Evgeniy Alekseev darkarca...@mail.ru,
 created: 2013-10-18? [Y/n]
 error: key Evgeniy Alekseev darkarca...@mail.ru could not be imported
 error: required key missing from keyring
 error: failed to commit transaction (unexpected error)
 Errors occurred, no packages were upgraded.


 What can I do to solve this problem?

Most likely you have the wrong clock time on your computer:
https://bbs.archlinux.org/viewtopic.php?id=149759


Re: [arch-general] gpg-agent: SSH_AGENT_FAILURE when adding an ECDSA key

2014-06-19 Thread Anatol Pomozov
Hi

On Fri, Jun 13, 2014 at 12:03 AM, Patrick Burroughs (Celti)
celticmad...@gmail.com wrote:
 On Thu, Jun 12, 2014 at 11:34 PM, Magnus Therning mag...@therning.org wrote:
 According to what I've found gpg-agent's ssh-agent should, as of
 version 2.0.21, support ECDSA keys, but still I can't add such a key:

 Am I doing something wrong here, or should I just use ssh-agent from OpenSSH
 instead (or stop using ECDSA keys)?

 ECDSA SSH keys in gpg-agent broke with libgcrypt 1.6+. You can get
 them working again by building gnupg from git.

I hit the same issue. Do you know what gnupg upstream commit fixes this problem?


Re: [arch-general] Network configuration

2014-06-13 Thread Anatol Pomozov
Hi

On Fri, Jun 13, 2014 at 1:39 AM, Yamakaky yamak...@yamaworld.fr wrote:
 Hi all

 I write you this mail because I'm a bit lost between all these network
 configuration tools available :

  - systemd-networkd
  - dhcpcd service
  - netctl
  - wpa_supplicant
  - NetworkManager/wicd

 There is two profiles I use now : a laptop (wifi auto-discover and connect
 with gui tray and easy to add network, ethernet auto-connect) and a
 raspberry pi server (low resources, ethernet only, dhcp configured, config
 not often changed). Actually, I use nm on my laptop (it's much much better
 than wicd) and dhcpcd on my raspberry pi. What would you use and why ?

As was said above, it is a matter of personal preference. Personally I
try to use the simplest possible tools that do the job, and I switched
all my machines to systemd-networkd (+wpa_supplicant for wifi).
Network/WiFi/DHCP work great, no complaints.

My advice is to start with systemd-networkd, and only look at the
alternatives if it does not work for you.
https://wiki.archlinux.org/index.php/Systemd-networkd#Basic_DHCP_network
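For reference, a minimal wired-DHCP unit along the lines of that wiki section - the file name and the interface glob are examples:

```ini
# /etc/systemd/network/20-wired.network
[Match]
Name=en*

[Network]
DHCP=yes
```

Then enable the service with 'systemctl enable systemd-networkd'.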


Re: [arch-general] Kernel updated to 3.14.5-1. Now my Lenovo IdeaPad hangs on boot...

2014-06-03 Thread Anatol Pomozov
Hi

On Tue, Jun 3, 2014 at 8:05 AM, Manuel Reimer
manuel.s...@nurfuerspam.de wrote:
 Hello,

 I use gummiboot to boot my Notebook. So far all kernel updates worked well
 and I never got any problems, but for some reason the update to 3.14.5 now
 causes my system to no longer boot up.

 Can someone help me to find the reason for the problem and to get my
 Notebook to boot up again?

To recover your machine:
1) Boot from the Arch ISO. I always have a USB pen drive with the Arch
image and have found it useful for emergency cases.
2) Find your system partitions; use 'lsblk' for this.
3) Mount your system partition, e.g. 'mkdir system; mount /dev/sda1 system'
4) arch-chroot into your system: 'arch-chroot system'
5) Fix your system, e.g. downgrade the kernel to the previous version:
'downgrade linux'
6) Reboot and enjoy
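The steps above as a command sketch - the device name and the kernel package version are examples, and 'downgrade' is an AUR helper ('pacman -U' with a cached package, as below, works just as well):

```shell
# after booting the Arch ISO:
lsblk                                  # 2) find the root partition
mkdir system
mount /dev/sda1 system                 # 3) mount it (example device)
arch-chroot system                     # 4) chroot into the installed system
pacman -U /var/cache/pacman/pkg/linux-3.14.4-1-x86_64.pkg.tar.xz  # 5) downgrade
exit
reboot                                 # 6) enjoy
```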


Re: [arch-general] Kernel updated to 3.14.5-1. Now my Lenovo IdeaPad hangs on boot...

2014-06-03 Thread Anatol Pomozov
Hi

On Tue, Jun 3, 2014 at 8:38 AM, Manuel Reimer
manuel.s...@nurfuerspam.de wrote:
 On 06/03/2014 05:16 PM, Anatol Pomozov wrote:

 To recover your machine:
 1) Boot from the Arch ISO. I always have a USB pen drive with the Arch image and
 found it useful for emergency cases.
 2) Find your system partitions. Use 'lsblk' for this
 3) mount your system partition, e.g. 'mkdir system; mount /dev/sda1
 system'
 4) arch-chroot into your system: 'arch-chroot system'
 5) fix your system e.g. downgrade kernel to previous version 'downgrade
 linux'
 6) reboot and enjoy


 Did so, now. Now I'm on 3.14.4, again. System boots without any problems.

 But why can't I use the current kernel? Does someone here successfully boot
 3.14.5 via efistub? Bug in kernel? Or maybe bug in kernel configuration?

You provided zero information about your problem, so it is hard to tell
what is going on there. It might be a kernel bug; here is the changelog
since 3.14.4: https://www.kernel.org/pub/linux/kernel/v3.0/ChangeLog-3.14.5
Check if you see changes related to your issue.


Re: [arch-general] Setting dirty ratio

2014-05-26 Thread Anatol Pomozov
Hi

On Sun, May 25, 2014 at 11:45 PM, Amal Roy a...@cryptolab.net wrote:
 What is the recommended way to set dirty ratio and dirty background ratio on
 boot?

As usual the answer can be found in wiki
https://wiki.archlinux.org/index.php/Sysctl#Virtual_memory

 I added those in the /etc/sysctl.d/99-sysctl.conf file and then added a systemd
 service to fork a process sysctl --system

You don't need to create a systemd service; one already exists:

http://www.freedesktop.org/software/systemd/man/systemd-sysctl.service.html

 on startup and it didn't seem to
 work. The command sysctl --system run as root is successful in setting the
 custom options set in /etc/sysctl.d/99-sysctl.conf.
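A drop-in along the lines discussed would look like this (the values are illustrative, not recommendations; systemd-sysctl.service applies the file at boot):

```ini
# /etc/sysctl.d/99-dirty.conf
vm.dirty_ratio = 10
vm.dirty_background_ratio = 5
```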


Re: [arch-general] VMware 10.0.2, kernel 3.14.2: recompiling vmware modules modules

2014-05-02 Thread Anatol Pomozov
Hi

On Fri, May 2, 2014 at 9:54 AM, Mihamina Rakotomandimby
mihamina.rakotomandi...@rktmb.org wrote:
 Hi all,

 As reported by the forum thread
 https://bbs.archlinux.org/viewtopic.php?pid=1410987 I encounter the same
 problem.

 Unfortunately, the topic has been closed because of trolling, but would you
 know a quick workaround (staying with this kernel)?

 I need VMware workstation for work too :-)


The issue should be fixed now.

The problem was that kernel 3.14.2 was compiled with gcc 4.9.0 plus new
compile flags, while you were trying to compile the kernel modules with
the old gcc. Now gcc 4.9.0 has moved to stable and the problem should
be resolved. Could you please try?


Re: [arch-general] Android support in Linux Arch

2014-04-23 Thread Anatol Pomozov
Hi,

As there is no strong consensus on what to do with the Android
development tools, I am going to leave the situation as-is. Arch users
will keep installing the packages either via the Android installer or
from AUR packages.

I am going to move the packages android-tools and android-udev to
[community]. These are small packages that many people find useful.


[arch-general] Android support in Linux Arch

2014-04-16 Thread Anatol Pomozov
In my TU application I promised to look at situation with Android
support in Arch.

Android is an open-source project that has a number of sub-projects.
The official website offers prebuilt binaries for those sub-projects,
such as the sdk, ndk, build-tools, IDE plugins, ... We want to simplify
Android installation and maintenance and are looking into turning them
into real packages. Installing tools by downloading and unpacking
tarballs is clunky and so '90s.

The naive way to add packages is to take these binaries and just repack
them into Arch packages, or download-at-installation - something like
what we do for proprietary software. AUR has many android packages that
download binary files from the Android website. But using prebuilt
binary packages is atypical for Arch. There are issues with it:

- the android download site provides only 32-bit binaries, so multilib
repositories are required
- the prebuilt packages do not use standard installation paths
- they are compiled against older versions of third-party projects,
which are bundled into the download files. The packages should depend
on existing system libraries instead.

The plan is to provide proper Arch packages that do not have the
problems listed above.

I initially tried to build a package that follows the official "build
the whole Android source tree" instructions [1], and then pack only the
parts we need. Oh boy, this was painful. The source tree is several GB
of code; it pulls in prebuilt versions of gcc, clang, python, ..., and
it requires specific versions of make and java. After a week of trying
to build Android on Arch without using prebuilts, I gave up.

Actually, those instructions [1] build system images that include the
kernel and ARM code for specific hardware, which Arch does not need.
I've decided to attack this problem from the other side: find what
exactly we need to pack as Arch packages and then build it from sources.

I've looked at the non-binary android packages in AUR to see what
people vote for (who said that voting is useless?). There are a number
of packages that seem like good candidates for moving to [community]:

- android-tools https://aur.archlinux.org/packages/android-tools/ It
contains adb and fastboot. These are the packages that I am interested
in. I am not an Android developer, just an Android user, and adb is the
most useful Android tool for me.

- android bash completion for adb/fastboot
https://aur.archlinux.org/packages/android-bash-completion/ - these
files can be merged into the previous package.

- android-udev https://aur.archlinux.org/packages/android-udev/ sounds
useful as well

These 3 packages are easy to move to [community], and my gut feeling
says they should be enough for most Android users.


There is another category of users: those who *develop* for Android
and need a lot of other tools, such as:

sdk
sdk-build-tools: aapt, aidl, dexdump, dx, llvm-rs-cc
ndk
api docs
IDE plugins?
system images?

I did not find source packages for these in AUR. So if we want to add
them to [community] we need to create such packages, but that might be
quite hard.

Are there people here with an Android development background? What
exactly do you miss in Arch? Is it worth building our own SDK/eclipse
plugins/...?


[1] http://source.android.com/source/building-running.html


Re: [arch-general] Heartbleed-bug in OpenSSL 1.0.1 up to 1.0.1f

2014-04-08 Thread Anatol Pomozov
Hi

On Tue, Apr 8, 2014 at 8:29 AM, Neal Oakey neal.oa...@googlemail.com wrote:
 Hi,

 there is an Bug(1) in OpenSSL 1.0.1 and as far as I'm informed this has
 only been patched in 1.0.1g.
 Many other Distributions have build there own patch, what is with us?

It is fixed already; the new version of openssl is in the stable
repository:
https://www.archlinux.org/packages/core/x86_64/openssl/

 Currently we have 1.0.1.f-2 which is effected as far as I can know.


Re: [arch-general] Heartbleed-bug in OpenSSL 1.0.1 up to 1.0.1f

2014-04-08 Thread Anatol Pomozov
Hi

On Tue, Apr 8, 2014 at 9:29 AM, Pierre Schmitz pie...@archlinux.de wrote:
 Am 08.04.2014 17:29, schrieb Neal Oakey:
 Hi,

 there is an Bug(1) in OpenSSL 1.0.1 and as far as I'm informed this has
 only been patched in 1.0.1g.
 Many other Distributions have build there own patch, what is with us?
 Currently we have 1.0.1.f-2 which is effected as far as I can know.

 Greetings
 Neal

 1) (sry, German)
 http://www.golem.de/news/sicherheitsluecke-keys-auslesen-mit-openssl-1404-105685.html

 I actually did push an updated package within 3 hours after the public
 announcement. I think that is pretty reasonable especially since we are
 not among the fortunate distros and companies that were notified
 beforehand.

Is there any closed security list for distros where such issues are
discussed/notified before a vulnerability gets public attention? If
there is one, then Arch should be added to it as well.


Re: [arch-general] Heartbleed-bug in OpenSSL 1.0.1 up to 1.0.1f

2014-04-08 Thread Anatol Pomozov
Hi

On Tue, Apr 8, 2014 at 8:32 AM, Anatol Pomozov anatol.pomo...@gmail.com wrote:
 Hi

 On Tue, Apr 8, 2014 at 8:29 AM, Neal Oakey neal.oa...@googlemail.com wrote:
 Hi,

 there is an Bug(1) in OpenSSL 1.0.1 and as far as I'm informed this has
 only been patched in 1.0.1g.
 Many other Distributions have build there own patch, what is with us?

 It is fixed already. The new version of openssl is in stable
 repository already.
 https://www.archlinux.org/packages/core/x86_64/openssl/

 Currently we have 1.0.1.f-2 which is effected as far as I can know.

One more tip: after you have updated the system and installed the new
openssl package, you need to restart the services that still use the
old version of openssl. Here is a one-liner (from [1]) that finds such
applications for you:

sudo lsof +c 0 | grep -w DEL | awk '1 { print $1 ": " $NF }' | grep ssl

[1] 
https://wiki.archlinux.org/index.php/Pacman_Tips#Find_applications_that_use_libraries_from_older_packages


Re: [arch-general] [arch-dev-public] Upgrading Apache to 2.4

2014-03-10 Thread Anatol Pomozov
Hi

On Fri, Mar 7, 2014 at 1:10 AM, Sebastiaan Lokhorst
sebastiaanlokho...@gmail.com wrote:
 Thanks for taking the effort to finally update Apache!

 When trying to start Apache with PHP, I get the same error as Rene.

 Just to be clear, what is the recommended way to run Apache+PHP now? Will
 mod_php5 will still be supported?

Ok, it seems that the main source of questions is the php-apache
package, which causes the "Apache is running a threaded MPM, but your
PHP Module is not compiled to be threadsafe. You need to recompile
PHP." error at apache start.

The answer is that you need to switch the apache MPM from the default
mod_mpm_event to the slower but mod_php-compatible mod_mpm_prefork. See
more information at the wiki page:
https://wiki.archlinux.org/index.php/LAMP#Troubleshooting
BTW kudos to our users who already updated the wiki for Apache 2.4!

And of course anyone is welcome to create a threadsafe version of
php-apache in AUR so it can be used with mpm_event.


Re: [arch-general] [arch-dev-public] Upgrading Apache to 2.4

2014-03-10 Thread Anatol Pomozov
Hi

On Mon, Mar 10, 2014 at 11:41 AM, ger...@gmail.com ger...@gmail.com wrote:
 On Mon, Mar 10, 2014 at 7:21 PM, Anatol Pomozov 
 anatol.pomo...@gmail.comwrote:

 Hi

 On Fri, Mar 7, 2014 at 1:10 AM, Sebastiaan Lokhorst
 sebastiaanlokho...@gmail.com wrote:
  Thanks for taking the effort to finally update Apache!
 
  When trying to start Apache with PHP, I get the same error as Rene.
 
  Just to be clear, what is the recommended way to run Apache+PHP now? Will
  mod_php5 will still be supported?

 Ok, it seems that main source of questions is php-apache package that
 causes Apache is running a threaded MPM, but your PHP Module is not
 compiled to be threadsafe.  You need to recompile PHP. error at
 apache start.

 The answer is that you need to switch apache MPM from default
 mod_mpm_event to slower but mod_php-compatible mod_mpm_prefork.  See
 more information in at wiki page
 https://wiki.archlinux.org/index.php/LAMP#Troubleshooting
 BTW kudos to our users who already updated wiki for Apache2.4!

 And of course anyone is welcome to create a threadsafe version of
 php-apache in AUR so it can be used with mpm_event.


 I've also had problems making nagios work under Apache 2.4. When I click on
 any sidebar link, instead of executing the CGI I'm presented with the
 download dialog to download the CGI file. I guess CGIs have stopped working
 after upgrading. I think mod_cgi does not exist for Apache 2.4, and none of
 the similarly-named mods (mod_fastcgi, mod_proxy_fcgi, mod_fastcgi) seems
 to be a drop-in replacement for mod_cgi.

 What is the recommended way to run CGIs, specifically those needed for the
 Nagios web interface, under Apache 2.4?

Update to version 2.4.7-2 (now in stable). It added the missing modules
to the package: mod_cern_meta, mod_cgi, mod_ident, mod_imagemap,
mod_lua, mod_proxy_html, mod_xml2enc.


Re: [arch-general] Problems of using pacman and updating the filesystem

2014-03-07 Thread Anatol Pomozov
Hi

On Fri, Mar 7, 2014 at 10:45 AM, Ralf Mardorf
ralf.mard...@alice-dsl.net wrote:
 On Fri, 2014-03-07 at 16:21 +, Paul Gideon Dann wrote:
 If the last time you updated was before 2012-11-04, there's a good
 chance you never made the switch to systemd, which will make things
 even harder for you.

 ... for several reasons, e.g. eth0 likely will become enp3s0.

The interface name depends on hardware configuration. See
http://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/
for more information.

Just type 'ip a' and you'll see all the network interface names.


Re: [arch-general] [arch-dev-public] Upgrading Apache to 2.4

2014-03-06 Thread Anatol Pomozov
Hi

On Wed, Feb 26, 2014 at 10:10 AM, Anatol Pomozov
anatol.pomo...@gmail.com wrote:
 Hi

 On Wed, Feb 26, 2014 at 10:01 AM, Alexander Rødseth rods...@gmail.com wrote:
 One suggestion is creating the Apache 2.4 PKGBUILD first, then talk to
 Jan de Groot.
 If he should not be interested in the endeavor, talk to another dev.

 Good news is that I work with Jan and other devs on pushing Apache 2.4
 to repos. In general they are very positive about this move.

 PKGBUILD is ready and once db5 todo is done I'll create Apache2.4 todo
 to rebuild the deps. So hopefully we'll see official apache 2.4
 package in [extra] some time soon. Stay tuned.

Apache 2.4 has been moved from [testing] to [extra] and now available
for everyone. Please update your setup, follow the migration
instructions https://httpd.apache.org/docs/trunk/upgrading.html and
report any problems.

Thanks everyone.


Re: [arch-general] [arch-dev-public] Upgrading Apache to 2.4

2014-03-06 Thread Anatol Pomozov
Hi

+ php maintainer

On Thu, Mar 6, 2014 at 3:05 PM, Rene Pasing r...@pasing.net wrote:
 Hi Anatol,

 On 03/06/2014 10:48 PM, Anatol Pomozov wrote:
 Apache 2.4 has been moved from [testing] to [extra] and now available
 for everyone. Please update your setup, follow the migration
 instructions https://httpd.apache.org/docs/trunk/upgrading.html and
 report any problems. Thanks everyone.

 I get the following error:

 Mar 06 23:59:34 VAIO-ARCH apachectl[697]: [Thu Mar 06 23:59:34.072638
 2014] [:crit] [pid 699:tid 139754760394624] Apache is running a threaded
 MPM, but your PHP Module is not compiled to be threadsafe.  You need to
 recompile PHP.
 Mar 06 23:59:34 VAIO-ARCH apachectl[697]: AH00013: Pre-configuration failed
 Mar 06 23:59:34 VAIO-ARCH systemd[1]: httpd.service: control process
 exited, code=exited status=1
 Mar 06 23:59:34 VAIO-ARCH systemd[1]: Failed to start Apache Web Server.
 Mar 06 23:59:34 VAIO-ARCH systemd[1]: Unit httpd.service entered failed
 state.

 I will switch to php-fpm soon, but nevertheless... Just wanted you to be
 aware... ;-)

Sounds like a bug to me. Mind filing one at bugs.archlinux.org?

Pierre, I found some information on Google+ related to the apache24
package in AUR: "The only thing was that I had to recompile PHP to be
thread-safe. It was the '--enable-zts' parameter, I think." That flag
sounds like a solution for this bug.


Re: [arch-general] [arch-dev-public] systemd 209 in [testing]

2014-02-21 Thread Anatol Pomozov
Hi

On Fri, Feb 21, 2014 at 9:07 AM, Genes Lists li...@sapience.com wrote:


 To be sure I rebooted - and still have the same problem - no suspend on lid
 close - the logs just say lid close / lid open as before, so the event is
 recognized but no suspend.

 Anything else I can try short of going back to systemd 208?

The best thing you can do is to keep working on this issue with the
systemd team [1]. If this is a bug, then it should be fixed upstream
and a new version of the Arch package shipped.

[1] http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [arch-general] journalctl and I/O errors

2014-02-05 Thread Anatol Pomozov
Hi

I agree on both points.

On Wed, Feb 5, 2014 at 10:03 PM, Sébastien Leblanc
leblancse...@gmail.com wrote:
 Conclusion (as I understand it):

 1. There is definitely a bug in Journalctl: it crashes (segfaults) on I/O
 errors.

A few months ago I had a problem with btrfs. I set the +C attribute
(disable copy-on-write) on existing journal files. Btrfs recommends
putting the attribute on empty files and seems to have been confused
that I applied it to non-empty files. Btrfs started returning IO errors
when I tried to read the files with 'less', and journalctl started
crashing with a segfault. That is very similar to what is being
discussed here. And I agree that journalctl should handle this more
gracefully.

If anyone still sees this problem please run 'strace journalctl ...'.
If it shows that a filesystem operation returns an IO error right
before the SEGFAULT then it confirms the current thesis.
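A sketch of what that check could look like (the -o log file and the grep pattern are my illustration; the exact syscalls that fail will depend on the system):

```shell
# Trace journalctl into a log file; an EIO-returning read/mmap right
# before the SIGSEGV would confirm the filesystem-error theory.
strace -o /tmp/journalctl.trace journalctl -k
tail -n 20 /tmp/journalctl.trace
grep -n 'EIO' /tmp/journalctl.trace
```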

 2. You have a drive that is failing, or your BIOS might not be set
 correctly. This is causing the I/O errors. How large is the drive? You
 might have to turn off settings such as SATA legacy compatibility or the
 like -- I had a 3TB drive that would cause ATA command errors on a ~2006
 computer; I found this option in the BIOS and as soon as I turned it off
 everything worked perfectly.

 Although journalctl should not crash on I/O errors, I think it is not
 unreasonable to assume that many other apps do not tolerate I/O errors
 either. So I would say: you should still report this bug upstream.

 --
 Sébastien Leblanc


Re: [arch-general] Packages Verified with MD5

2014-01-12 Thread Anatol Pomozov
Hi,

I believe the topic starter has concerns about the weakness of the MD5
hash algorithm. He suggests deprecating md5sums=() and using a stronger
cryptographic hash algorithm like SHA256. Personally I avoid MD5 in my
packages because of its bad reputation, though I am not a crypto expert.


 I have been assuming the former, that when I do pacman -S firefox or pacman 
 -S truecrypt, it runs the PKGBUILD on *my* system. Is that not the case?
No. Both firefox and truecrypt are distributed as binary packages.
The PKGBUILD is used by the maintainer only at build time. AUR
packages, on the other hand, are always built on your machine.
md5sums=() checks that the *source* files downloaded from the internet
are correct. A MITM attack is still possible here.


Re: [arch-general] Error install blink-darcs

2014-01-10 Thread Anatol Pomozov
Hi

On Fri, Jan 10, 2014 at 2:39 PM, Maykel Franco maykeldeb...@gmail.com wrote:
 2014/1/10 Karol Blazewicz karol.blazew...@gmail.com:
 On Fri, Jan 10, 2014 at 10:09 PM, Maykel Franco maykeldeb...@gmail.com 
 wrote:
 thanks

 2014/1/10 Mark Lee m...@markelee.com:
 On Fri, 2014-01-10 at 22:02 +0100, Maykel Franco wrote:
 I cannot install blink-darcs in archlinux...Can I help me please??

What it means is that the package is broken.

Usually you report this kind of problem on the AUR page, but the author
is inactive (a few of his packages were recently disowned). In this
case the best solution is to request 'disown' for this package and fix
it yourself, or wait until somebody adopts and fixes it for you (this
usually happens quickly for popular packages).


[arch-general] Ruby gem packages in Arch

2014-01-10 Thread Anatol Pomozov
Hi everyone

I manage a lot of Ruby packages (~230) in AUR, updated ~150 of them
recently. I would like to share my experience with herding these
packages. Some of the issues might be similar to other language
package systems (cpan, pip, nodejs).

First, it is worth mentioning that Ruby has its own package manager
(rubygems). It is the standard, and nearly all ruby software is
distributed via rubygems. Rubygems has its own package specification
that includes information like project homepage, developer contacts,
license, and information about native dependencies. It makes sense to
parse those specifications and use them for PKGBUILD maintenance. The
idea is to have a script that will create ruby packages, bump
versions, and update dependencies if needed. And this scriptability is
important - copy/pasting ruby gems information is boring and
error-prone. There are several scripts that perform gem-to-arch
conversion and I work on one of them called 'gem2arch' [1].

On the other hand, rubygems differs from pacman in ways that sometimes
make it harder to match packages. The main difference is that rubygems
has a very flexible dependency mechanism. Several versions of a package
can be installed at the same time. A package can specify custom
dependency restrictions: it can depend on a specific version, e.g.
'=4.0.2', or it can ask for a range of versions, or any crazy set of
restrictions like 'provide me package FOO with version between 2.3 and
3.1.4 but excluding 2.8 and 3.0.2'. The most popular type of
dependency restriction is 'approximately greater', aka '~>'. '~>3.0.3'
means 'give me the package with version 3.0.XXX where XXX is equal to
or larger than 3'. '~>' is quite a popular mechanism to stick to a
specific version range with a stable API. Because rubygems can have
several versions of a package installed, such restrictions do not block
other packages from using more recent versions. Thus package FOO might
use version 2.x while BAR uses 3.x and everyone is happy.
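To make the '~>' semantics concrete, here is a rough shell check of the constraint '~> 3.0.3' (i.e. >= 3.0.3 and < 3.1). This is my own illustration, not part of gem2arch, and it relies on GNU sort's -V version ordering:

```shell
# Returns "yes" if $1 satisfies '~> 3.0.3' (>= 3.0.3 and < 3.1), else "no".
satisfies_twiddle() {
  v=$1; lo=3.0.3; hi=3.1
  [ "$(printf '%s\n' "$lo" "$v" | sort -V | head -n1)" = "$lo" ] &&  # v >= lo
  [ "$(printf '%s\n' "$v" "$hi" | sort -V | head -n1)" = "$v" ] &&   # v <= hi
  [ "$v" != "$hi" ] && echo yes || echo no                           # v != hi
}
satisfies_twiddle 3.0.7   # yes: still within the 3.0.x series
satisfies_twiddle 3.1.0   # no: the minor version moved on
```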

This dependency mechanism is different from Arch, where the
preferred form of dependency is 'use the HEAD version'. Several
versions can be installed in Arch by adding a version suffix to the
package name, e.g. ruby1.8 or apache2.2. But versioned packages are an
exception rather than the rule in Arch. In rubygems using a non-HEAD
version is a normal and widely-used practice. ~20% of my ruby packages
are versioned ones, e.g. ruby-rails-2 and ruby-tilt-1. ruby-rails-2
means the latest version of rails 2.xx and ruby-tilt-1 is the latest
version of tilt 1.yy. Dependency version calculation might be tricky in
case of complex dependency restrictions, e.g. a dependency on foo-2.2.1
might resolve to 'foo', 'foo-2', 'foo-2.2' or 'foo-2.2.1' depending on
what other released versions 'foo' has. Doing these calculations
manually might be tricky, but 'gem2arch' makes things easier.

In general adding/updating packages is a simple task (thanks to
gem2arch). Only small number of packages require custom modifications
like source patching or adding native dependencies.

Emphasizing the importance of scripting, I would like to mention a rule
that makes scripting harder. The Ruby language page [3] says "For
libraries, use ruby-$gemname. For applications, use the program name."
How can a script apply this rule? Is a file in /usr/bin enough
to tell that this is an app? Actually a lot of ruby libraries can
be used both as a command-line application and a library (rake, rdoc,
rubygems, erubis, nokogiri, ...), so it is safe to say that all
packages in rubygems are libraries. If so, I propose to change this
rule to "all ruby packages should be named ruby-$gemname". Other ruby
users like this idea [2].

Also some maintainers try to make package names nicer and do not
follow the ruby-$gemname rule. For example, the 'rubyirc' gem was
packaged as the ruby-irc Arch package. It is harder for a script to
match the gem to this package. There was also a problem when another
gem called 'irc' appeared - that is the one that actually should be
used for the 'ruby-irc' package. So I propose another rule for Ruby
(and other languages) - follow 'ruby-$gemname' even if the gemname
already contains a mention of 'ruby'.

On the negative side of ruby packaging in AUR I can add the bad
situation with inactive maintainers. The ruby packages are spread over
many maintainers, most of whom are inactive and do not update packages.
Ruby packages in AUR stay out-of-date for many months. In general I
think that for active ruby development that does not require Arch
dependencies, a 'gem installed in $HOME' is the better way to go.


[1] https://github.com/anatol/gem2arch
[2] https://wiki.archlinux.org/index.php/Talk:Ruby_Gem_Package_Guidelines
[3] https://wiki.archlinux.org/index.php/Ruby_Gem_Package_Guidelines


Re: [arch-general] Default value of j in makeflags of makepkg.conf

2014-01-03 Thread Anatol Pomozov
Hi

On Fri, Jan 3, 2014 at 6:55 AM, Paul Gideon Dann pdgid...@gmail.com wrote:
 On Friday 03 Jan 2014 15:33:05 Martti Kühne wrote:
 Because I have a strong opinion about this. Also to prevent people
 from running into this who are not that experienced in making things
 work.

 If someone makes more than a few packages, they will have encountered 
 makepkg.conf, to
 at least set their e-mail address.  When I started using Arch, I think I 
 discovered
 makepkg.conf and added the -j to makeflags pretty much on day one of 
 experimenting with
 PKGBUILDs.  But I think it comes down to this:

 1) If someone knows that the -j flag exists, it won't take them long to 
 figure out how to add
 it to makeflags, and then the responsibility is with them to ensure they know 
 it can (rarely!)
 break some builds.

 2) If the -j flag is added by default, builds may break unpredictably, and 
 users will not know
 why.  They may not be aware of -j, and may not make the connection to 
 makepkg.conf at all.

 Option 1 seems a safer default to me.  However, I think this should be 
 properly documented
 in makepkg.conf: there should be an actual suggestion to add -j, along with a 
 warning that
 in rare cases it may cause breakage.  Just a single-line comment, possibly 
 with a link to the
 wiki, would be enough.

But there always will be people who use -jN (e.g. me). If we decide
to keep broken PKGBUILDs in AUR forever then sooner or later the
-jN people will be hit by these issues. So the choice is really:

1) Keep the broken packages forever and care only about -j1 people
(who are the majority now).
2) Make -jN the default. It will speed up compilation but it will also
make the broken packages more visible.

IMHO #2 is better. It is better to highlight all the broken PKGBUILDs
and fix them, thus making them work for everyone.
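For those who do opt in, the change is a one-liner in makepkg.conf (a sketch; -j$(nproc) assumes GNU coreutils' nproc is available):

```shell
# /etc/makepkg.conf (or ~/.makepkg.conf): run make with one job per core.
# Note: a rare PKGBUILD may break under -jN; fall back to -j1 for those.
MAKEFLAGS="-j$(nproc)"
```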


Re: [arch-general] apache 2.4

2013-12-03 Thread Anatol Pomozov
Hi

 Exactly.  AFAIK, we have no-one interested in maintaining apache-2.4.
 I'm sure we could have apache22 and apache (2.4) otherwise.

If no-one from the core developers wants to maintain this package,
could you please move apache and its modules to the community repo?
There are TUs who will help maintain them. We already have another
popular http server (nginx) that is successfully maintained by the
community, and Apache should be fine there as well.


Re: [arch-general] apache 2.4

2013-12-02 Thread Anatol Pomozov
Hi,

This situation with apache-2.4 reminds me of the recent saga with the
libxml2 update. libxml2 was marked out-of-date for 9 months and the
maintainer ignored requests to upgrade the package. The only
explanation was "if the maintainer does not upgrade the package there
must be a good reason for it - the new version probably breaks other
apps". But it turned out that the new libxml2 package did not break
anything and the upgrade was very simple - it was just a version bump
and no dependency rebuilds were needed. I concluded that the maintainer
had just lost interest in supporting libxml2.

Could it be the same situation with the apache-2.2 package? If the
maintainer lost interest, would it be better to drop Apache to the
'community' repo where it has a higher chance of being upgraded? IMHO
it is a shame for Arch to keep old versions of software without a clear
explanation - 2.4.1 was released almost 2 years ago!


Re: [arch-general] apache 2.4

2013-12-02 Thread Anatol Pomozov
Hi

On Mon, Dec 2, 2013 at 12:06 PM, Leonid Isaev lis...@umail.iu.edu wrote:
 On Mon, 2 Dec 2013 11:32:13 -0800
 Anatol Pomozov anatol.pomo...@gmail.com wrote:

 Hi,

 This situation with apache-2.4 reminds me recent saga with libxml2
 update. libxml2 was marked out-of-date for 9 months and maintainer
 ignored requests about upgrading the package. The only explanation was
 if maintainer does not upgrade the package there must be a good
 reason for it - new version probably breaks other apps. But it end up
 that the new libxml2 package did not break anyone and upgrade was very
 simple - it was just a version bump and no dependencies rebuild was
 needed. I made a conclusion that maintainer just lost interest in
 supporting libxml2.

 What exactly are you complaining about?
What I am trying to say is that keeping software up-to-date is one of
the main maintainer responsibilities. Especially in Arch Linux, which
"strives to stay bleeding edge, and typically offers the latest stable
versions of most software" (quote from
https://wiki.archlinux.org/index.php/Arch_Linux).

 Apache 2.2 is still supported
 upstream (2.2.26 was released on 11/16/2013 -- two weeks ago). Apache 2.4 is
 just another branch.
No, it is not just another branch. 2.4 is the latest stable version
recommended by upstream; 2.2 has the status of a legacy release.



 Could it be the same situation with apache-2.2 package? If the
 maintainer lost interest would it be better to drop Apache to
 'community' repo where it has higher chance to be upgraded? IMHO it is
 shame for Arch to keep old versions of software without clear
 explanation, 2.4.1 was released almost 2 years ago!

 Apache 2.2.15 was pushed in 07/2013. This situation hardly qualifies as lost
 interest. If you desperately need 2.4.7 and are absolutely sure that it is
 compatible with 2.2 why not just compile it yourself?

 Cheers,
 --
 Leonid Isaev
 GnuPG key: 0x164B5A6D
 Fingerprint: C0DF 20D0 C075 C3F1 E1BE  775A A7AE F6CB 164B 5A6D


Re: [arch-general] Initramfs fallback render

2013-11-15 Thread Anatol Pomozov
Hi

On Thu, Nov 14, 2013 at 4:55 PM, Ismael Bouya
ismael.bo...@normalesup.org wrote:
 Hi all,

 I have always learnt that it was good practice (to use sudo instead of root
 su and), when we use sudo, to completely disable root login (by disabling
 his password).

In fact disabling the root password does not completely prevent a user
from logging in as root. There are other ways to authenticate, e.g. SSH
keys (assuming sshd_config did not disable root login).

The correct way to disable root completely is to make the account
expired: "usermod --expiredate DATE_IN_PAST root". I tried it on my
machine and found that it breaks pacman. I believe pacman uses su
before running install scripts.




 However when we need to boot into fallback mode, initramfs asks for root
 password! Is there a standard/automated way to ask/permit another user via
 initramfs in Archlinux?

 If not, how do you deal with that usually?


 Thanks in advance for your response!

 Regards,
 --
 Ismael


Re: [arch-general] Initramfs fallback render

2013-11-15 Thread Anatol Pomozov
Hi

On Fri, Nov 15, 2013 at 7:02 AM, Thomas Bächler tho...@archlinux.org wrote:
 Am 15.11.2013 15:55, schrieb Anatol Pomozov:
 The correct way to disable root completely is to make it expired
 usermod --expiredate DATE_IN_PAST root. I tried it on my machine and
 found that pacman is broken. I believe it uses su before running
 install scripts.

 Nothing about disabling the root account is correct.

Disabling the root account is typical practice on multi-user machines.
sudo is a much better solution as it allows fine-grained control over
super-user abilities.

 If you disable
 the account, both 'su' and 'sudo' cannot function. You _need_ the root
 account.

"--expiredate" differs from disabling login in that "--expiredate"
does not allow "sudo su" and does not allow any other authentication
method (such as ssh keys). Note that "sudo foo" still works even if the
root account is expired (sudo ignores the expiration date of the
destination account).


Re: [arch-general] How to show (kernel) messages by journalctl?

2013-10-14 Thread Anatol Pomozov
Hi

On Sat, Oct 12, 2013 at 6:06 PM, Chris Down ch...@chrisdown.name wrote:
 On 2013-10-13 01:51, Ralf Mardorf wrote:
 [rocketmouse@archlinux ~]$ journalctl -k
 -- Logs begin at Wed 2013-08-28 22:06:09 CEST, end at Sat 2013-10-12 
 20:36:06 CEST. --

 Your user needs to have the right privileges to view kernel messages, or run 
 as
 root.

Or be a member of the 'systemd-journal' UNIX group. Quoting the man
page http://www.freedesktop.org/software/systemd/man/journalctl.html:

All users are granted access to their private per-user journals.
However, by default, only root and users who are members of the
systemd-journal group get access to the system journal and the
journals of other users.


 chris@gopher:~$ journalctl -k | wc -l
 1
 chris@gopher:~$ sudo journalctl -k | wc -l
 957
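Granting that group membership is a single gpasswd call (a sketch; it takes effect only after logging out and back in):

```shell
# Add the current user to systemd-journal so plain 'journalctl -k' works:
sudo gpasswd -a "$USER" systemd-journal
```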


Re: [arch-general] Way too much kworkers

2013-10-01 Thread Anatol Pomozov
Hi

On Tue, Oct 1, 2013 at 2:30 PM, Dimitris Zervas dzer...@dzervas.gr wrote:
 Um, I have a very powerful pc, so I get no slow downs. i7-3820 with 16GB of
 ram.

So if your system works fine then I am not sure what you are complaining about.

On a more serious note: threading in the Linux kernel is quite
efficient. A thread's data takes just a few kibibytes of non-swappable
kernel memory, and 200 threads is nothing for modern server/desktop
machines. I saw servers with 30K threads that were doing absolutely
fine. And by the way - a lot of the kworkers are per-CPU, so the more
(virtual) CPU cores you have, the more kworker threads you will see.

The problem with threading is not the total number of threads (when
most of the threads are sleeping and doing nothing), but *actively*
jumping from one thread to another. Such behavior happens in networking
services where a thread starts, does a small amount of work, then gets
blocked because it is doing disk or database access, switches to
another task that also does a little work, and switches back. Such
behavior causes cache thrashing, TLB cache flushes and thus poor
performance. Some people try to fight this problem by avoiding
thread switches using some kind of callback framework (a-la node.js)
or user-space threads (a-la goroutines in Go).

As I said, if kernel threads are not active then they are almost free
for your system. And there is a good reason why the kernel started
using kworkers more actively. Imagine you write kernel code that needs
delayed execution of two independent functions A() and B(). You can
start one thread and call A() and B() serially, but in this case if
A() is blocked then B() will not run until A() is done. If you start
two threads, then when A() is blocked, the CPU can switch execution
to B(). Thus your tasks will finish faster and the system becomes more
responsive. And most server/desktop users choose overall system
responsiveness even if it takes a little bit more memory.
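The A()/B() point is easy to demonstrate from user space with two sleeping "workers" (an analogy only - real kworkers live inside the kernel):

```shell
# Serial: the second job cannot start until the first finishes (~2s).
start=$(date +%s)
( sleep 1; sleep 1 )
serial=$(( $(date +%s) - start ))

# Two workers: the jobs overlap while blocked (~1s), like two kworkers.
start=$(date +%s)
( sleep 1 & sleep 1 & wait )
parallel=$(( $(date +%s) - start ))

echo "serial=${serial}s parallel=${parallel}s"
```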


 htop reports no CPU usage, but free -m reports some interesting memory
 consumption.
 free -m
 total   usedfree sharedbuffers
 cached
 Mem: 16030  15868161 0150  13831
 -/+ buffers/cache:   1886  14143
 Swap:0  0  0

 I know that used is for caching/etc., but 15GB is a bit too much.
 Downgrading the kernel is not an option because the kernel is patched and
 downgrading would take just too much.
 I know have 234 kworkers, without suspending.

This cache usage has nothing to do with the number of kworkers. This
amount is used by the buffer cache. The buffer cache contains data read
from slow devices like disks to avoid reading it again (that is what a
cache is for). You can drop the cache with:

sync; echo 3 | sudo tee /proc/sys/vm/drop_caches


Re: [arch-general] Upgrade problem

2013-09-30 Thread Anatol Pomozov
Hi

On Mon, Sep 30, 2013 at 4:43 AM, Phil Dobbin bukowskis...@gmail.com wrote:
 Hi, all.

 I'm a newcomer to Arch Linux so excuse my ignorance.

 I attempted to update yesterday (pacman -Syu)  I got this message at the
 end of the update:

 '(121/121) checking for file conflicts
 [#] 100%
 error: failed to commit transaction (conflicting files)
 filesystem: /bin exists in filesystem
 filesystem: /sbin exists in filesystem
 filesystem: /usr/sbin exists in filesystem
 Errors occurred, no packages were upgraded.'

 Can anybody tell me what it means  how to go about fixing it (I've got Arch
 running on a VPS).

 Many thanks,

 Cheers,

 Phil...

Follow these instructions
https://www.archlinux.org/news/binaries-move-to-usrbin-requiring-update-intervention/


Re: [arch-general] Git

2013-09-30 Thread Anatol Pomozov
Hi

On Mon, Sep 30, 2013 at 3:55 PM, Daniel Wallace
danielwall...@gtmanfred.com wrote:


 From: pdgid...@gmail.com
 To: arch-general@archlinux.org
 Date: Mon, 30 Sep 2013 10:32:26 +0100
 Subject: Re: [arch-general] Git

 On Monday 30 Sep 2013 05:13:57 Sebastian Schwarz wrote:
  On 2013-29-09, Tom Gundersen t...@jklm.no wrote:
   If we were to use git, we should have one git repository per
   package, and also provide one repository which includes all
   the packages as submodules.
 
  Why not use one branch per package and one branch per repository
  with the packages as submodules instead of a repository for each
  package?  This way all the packages would be in a single
  repository and could be fetched all at once or one at a time.

 If you had one package on each branch, cloning the repository would bring 
 down
 all of the packages together, because all of the branches in a git 
 repository are
 fetched when you clone.  Keeping unrelated code in different branches in the
 same repo is a bit weird in Git, and is not generally done; it almost always 
 makes
 more sense to use a separate repo for each code base.

 Paul

 You don't have to pull down all the branches at the same time.

 Right now I maintain my own sub patch set for packages that I want
 added or removed, by using git clone --single-branch


 git clone --single-branch git://projects.archlinux.org/svntogit/packages.git 
 -b packages/git

 then when I want another package from extra or core, i can fetch it

 git fetch origin packages/git
 git checkout -b packages/git FETCH_HEAD

 and you can git pull --rebase from origin in the same way

I think it makes more sense to use branches for stable/testing
versions of the same source tree. Using branches to track different
projects is indeed an unusual way to use git.


Re: [arch-general] [aur-general] systemd 207 and btrfs

2013-09-20 Thread Anatol Pomozov
Hi

On Thu, Sep 19, 2013 at 11:23 AM, Tom Gundersen t...@jklm.no wrote:
 On Thu, Sep 19, 2013 at 11:10 AM, Jameson imntr...@gmail.com wrote:
 On Wed, Sep 18, 2013 at 9:50 PM, Curtis Shimamoto
 sugar.and.scru...@gmail.com wrote:
 This is just a shot in the dark, but what if you were to put the necessary
 modules for btrfs in mkinitcpio.conf's MODULES list to have it loaded 
 explicitly
 and early?

 That did it.  I added btrfs, zlib_deflate, and libcrc32a.  I think I
 had previously needed to add crc32a for a different problem I was
 having with multi-device btrfs volumes.

 For what it is worth, the cause of this bug was that the creation of
 static device nodes (including /dev/btrfs-control) moved from
 systemd-udevd to systemd-tmpfiles, but we forgot to add a call to
 tmpfiles in the udev hook. This has now been fixed and the systemd in
 testing should work with btrfs without the need for adding any modules
 manually.

I consider this a severe bug. Those who use btrfs on the root
filesystem (like me) ended up with an unbootable machine after this
upgrade. Please fix it in [core] as soon as possible.


Re: [arch-general] reading messages during shutdown

2013-07-04 Thread Anatol Pomozov
Hi,

On Thu, Jul 4, 2013 at 10:54 AM, F. Gr. frgroc...@gmail.com wrote:
 Hi,
 I've noticed that there are some warning during the halt/reboot
 process. How can I read these messages?

All the logs are saved and you can see them using the journalctl tool.
Run

$ journalctl -a -r

It will show logs in reverse order. Scroll to the previous shutdown
time.
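If the journal is persistent, newer systemd (>= 206) can also address a boot directly, which avoids the scrolling - a sketch:

```shell
# Show the previous boot's log, newest entries first:
journalctl -b -1 -r
```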


Re: [arch-general] vi just terminates on a 32bit machine

2013-06-16 Thread Anatol Pomozov
Hi

On Sun, Jun 16, 2013 at 2:21 AM, Manuel Reimer
manuel.s...@nurfuerspam.de wrote:

 Hello,

 if I access one of my systems via ssh and try to use vi there, then it 
 immediately returns with exit status 1.

 System is an up-to-date 32bit ArchLinux system.

 I've attached the strace output of my try to run vi to this mail.

 Can someone see there what could have happened? Reinstalling vi didn't fix 
 this for me...

 Thank you very much in advance

strace has the following lines near the end:

write(1, "\33[?1049h\33[39;1H\"/var/tmp\" ", 26) = 26
write(1, "\33[7mValue too large for defined "..., 41) = 41
write(1, "\33[27m\n", 6) = 6

strace truncated the second part of the message "Value too large for
defined ...". Just run vi without strace and check vi's output/logs;
most likely the full error will be self-descriptive.


Re: [arch-general] Super weird dd problem.

2013-06-10 Thread Anatol Pomozov
Hi

On Mon, Jun 10, 2013 at 1:22 AM, Thomas Bächler tho...@archlinux.orgwrote:

 Am 10.06.2013 05:18, schrieb Anatol Pomozov:
  sync is not a workaround, it is a right solution.

 You are wrong.

  Under the hood copying in linux works following way. Every time you read
  something from disk the file information will stay cached in memory
 region
  called buffer cache.

 That is true - on a mounted file system. Writing directly to a block
 device (like /dev/sdb) does not use the buffer cache in any way.


Raw device access *does* use buffer cache. You can easily check it by
watching writeback trace events:

# Create a loop block device
dd if=/dev/zero of=afile bs=1M count=1000
sudo losetup /dev/loop0 ./afile

# enable writeback trace kernel events
echo 1 | sudo tee /sys/kernel/debug/tracing/events/writeback/enable
echo 1 | sudo tee /sys/kernel/debug/tracing/tracing_on
# watch writeback events only for loop0 (7:0 is its major:minor)
sudo grep 7:0 /sys/kernel/debug/tracing/trace_pipe

# Now perform raw block device write and watch for writeback events
sudo dd if=/dev/zero of=/dev/loop0 bs=1K



What you meant is direct I/O - a direct I/O operation bypasses the
buffer cache. So if you run

sudo dd if=/dev/zero of=/dev/loop0 bs=1K oflag=direct

you will not see writeback events as expected.

But direct I/O is orthogonal to raw block device access.



  3) Call dd operation with conv=fsync flag, this tells that dd should
  not return until all data is written to the device.

 Again, fsync only affects files on a mounted file system, not raw block
 devices.


Re: [arch-general] Super weird dd problem.

2013-06-09 Thread Anatol Pomozov
Hi


On Sun, Jun 9, 2013 at 5:53 PM, Pedro Emílio Machado de Brito 
pedroembr...@gmail.com wrote:

 2013/6/9 Alfredo Palhares masterk...@masterkorp.net:
  Hello,
 
  So I was creating a archlinux usb bootable drive:
 
  [root@masterkorp-laptop Downloads]# dd bs=4M
 if=archlinux-2013.06.01-dual.iso of=/dev/sdb
  130+1 records in
  130+1 records out
  548405248 bytes (548 MB) copied, 0.964976 s, 568 MB/s
 
  I was like WOW, this was too fast! But nothing ever gets written to
  the pen drive.
  To add to the weird factor, a dd to dev/sdb1 (partition) works as it
  should, slowly. But then ofcourse the iso gets unbootable.
 
  The md5sum on the iso is correct.
  I tried with diferent pen drives.
 
  Please, any suggestions are welcome.
 

 I've been having this sort of problems with removable storage lately
 (copying multiple GBs of songs in a few seconds, except not really).
 The workaround I found out is to run the sync command after copying.


sync is not a workaround, it is the right solution.

Under the hood, copying in linux works the following way. Every time
you read something from disk, the file data stays cached in a memory
region called the buffer cache. The next time you read the same data it
will be served from RAM, not from disk. This speeds up the read
operation a lot - reading from RAM is ~10x faster than reading from
disk [1].

The buffer cache is used for write operations as well. When you write
to disk it actually writes to memory and the operation is reported as
finished. Moments later a special process called writeback sends this
data to disk. This trick speeds up writes too. Of course it assumes
that the underlying disk will not suddenly disappear (like in your case
with the USB pen).

If you want to make sure that data is really written then you should do
one of the following things:

1) Unmount the device correctly. Instead of just pulling the USB pen
you should run "umount YOUR_DEVICE_NAME". umount flushes all dirty
blocks to the device.

2) Call "sync" (which flushes dirty buffers) and then unplug/unmount
the USB pen.

3) Run the dd operation with the conv=fsync flag, which tells dd not to
return until all data is written to the device.
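A sketch of option 3 (the target path here is a harmless scratch file; with a real USB stick it would be e.g. of=/dev/sdX, which destroys everything on the device):

```shell
# conv=fsync makes dd call fsync() on the output before exiting, so the
# command only returns after the data has actually reached the target.
dd if=/dev/zero of=/tmp/fsync-demo.img bs=1M count=8 conv=fsync
```

Newer GNU coreutils also accept status=progress, which shows the real write rate as it happens.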

That could take some time as all your data is being actually written
 to disk.

 I believe this is related to the write cache, please let me know if
 you find a better solution to this.



 When I first learned DD, to create bootable disks, 1M was the suggested size
 because you could manage to miswrite with something larger. The caution
 was: Use 1M to restrict your speed so that it writes properly which I
 never actually understood, but never had a problem with. Only recently
 have I seen the suggestion to use 4M, but always with 1M as the fallback
 option if it doesn't work.


This statement does not make sense to me. A larger block is better
because you need to make fewer system calls. If a large block miswrites
data then it is a bug (in the kernel or a driver) and should be
reported to the kernel mailing list.

[1] http://highscalability.com/numbers-everyone-should-know


Re: [arch-general] Super weird dd problem.

2013-06-09 Thread Anatol Pomozov
Hi


On Sun, Jun 9, 2013 at 8:47 PM, Pedro Emílio Machado de Brito 
pedroembr...@gmail.com wrote:

 2013/6/10 Anatol Pomozov anatol.pomo...@gmail.com:
 
  sync is not a workaround, it is a right solution.
 
  Under the hood copying in linux works following way. Every time you read
  something from disk the file information will stay cached in memory
 region
  called buffer cache. Next time you read the same information kernel it
  will be served from RAM, not from disk. This speedups the read operation
 a
  lot - reading from RAM ~10 faster than reading from disk [1].
 
  buffer cache is used for write operations as well. When you write to
 disk
  it is actually writes to to memory and operation reported as finished.
  Moments later special process called writeback sends this data to disk.
  This trick also allows to speedup the process. Of course it supposes that
  underlying disk will not suddenly disappear (like in your case with USB
  pen).
 
  If you want to make sure that data is really written then you should do
 one
  of the following things:
 
  1) Unmount device correctly. Instead of just pulling the USB pen you
 should
  do umount YOUR_DEVICE_NAME. umount flushes all dirty blocks to the
  device.

 I always unmount my devices before unplugging, but I found out that
 after writing a lot of data to a slow device (SD card in my case),
 umount or udiskie-umount would block for a long time, then exit with
 some weird error message and the data would have not been correctly
 written.


Weird... What is the message?

 2) Call sync (that flushes dirty buffers) and then plug/umount USB pen.
 

 Yeah, I do that, sync then umount, but it bothers me that rsync
 --progress shows hundreds of MB/s or so for the first few... GBs? of
 files, then slows down to the actual speed of the device when the
 write cache fills up. And then I hit sync and it blocks for a few more
 minutes until everything is copied.


This sounds right. At the beginning the kernel allocates memory for the
buffer cache and puts dirty data there. Writeback sees it and starts
writing this data to the device, but the device bandwidth is much
smaller than the speed at which you provide the data. Soon the
available memory runs out and rsync has to wait for writeback, which
slowly flushes dirty memory pages to the device and returns free pages
back to the pool.

 3) Call dd operation with conv=fsync flag, this tells that dd should
  not return until all data is written to the device.
 

 Is there a similar flag for rsync? There is no fsync string in the
 manpage, and countless sync, for obvious reasons.
 Or a way to make the write cache smaller, or disable it entirely, for
 removable devices? I recall seeing such an option years ago in
 Windows.


You can mount your removable device with the "sync" option. See "man
mount".
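A sketch of that (device and mount point are examples only):

```shell
# 'sync' mounts write through immediately - slower overall, but rsync's
# progress then reflects the device's real speed. Beware of flash wear.
sudo mount -o sync /dev/sdX1 /mnt/usb
```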


Re: [arch-general] Perl 5.18 in [testing]

2013-05-28 Thread Anatol Pomozov
Hi


On Mon, May 27, 2013 at 7:43 PM, Anatol Pomozov anatol.pomo...@gmail.comwrote:

 Hi


 On Tue, May 21, 2013 at 1:28 AM, Florian Pritz bluew...@xinu.at wrote:

 Hi,

 Perl 5.18, as any other new perl version, requires all modules that are
 not purely perl code to be rebuilt. We did that for all packages in our
 repos.

 For a list of upstream changes please refer to `man perldelta`.

 Since users probably installed some from AUR or with
 CPANPLUS::Dist::Arch, I wrote a script[1] that generates a local rebuild
 list.

 [1]: http://git.server-speed.net/bin/plain/find-broken-perl-packages.sh

  - raw.txt contains a list of files that generated an error
  - perl-modules.txt contains a list of modules the files belong to
  - perl-dists.txt contains a list of distributions
  - pacman.txt contains a list of pacman packages the files belong to

 Binaries linking with libperl.so will also need to be rebuilt. You can
 use lddd from devtools to find those.

 Please report any issue you encounter.


 I updated my system with pacman -Suy, it brought perl 5.18 update.

 Now one of perl apps that I use crashes. The app uses standard perl
 modules, no third-party native modules. Could it be because of the perl
 update?


 ===
 Perl API version v5.16.0 of Net::SSLeay does not match v5.18.0 at
 /usr/lib/perl5/site_perl/Net/SSLeay.pm line 370.
 Compilation failed in require at
 /usr/share/perl5/site_perl/IO/Socket/SSL.pm line 17.
 BEGIN failed--compilation aborted at
 /usr/share/perl5/site_perl/IO/Socket/SSL.pm line 17.
 Compilation failed in require at /usr/share/perl5/site_perl/Net/HTTPS.pm
 line 26.
 Can't locate Net/SSL.pm in @INC (you may need to install the Net::SSL
 module) (@INC contains: /usr/lib/perl5/site_perl /usr/share/perl5/site_perl
 /usr/lib/perl5/vendor_perl /usr/share/perl5/vendor_perl
 /usr/lib/perl5/core_perl /usr/share/perl5/core_perl .) at
 /usr/share/perl5/site_perl/Net/HTTPS.pm line 30.
 Compilation failed in require at /usr/share/perl5/site_perl/LWP/Protocol/
 https.pm line 86.
 ..


Nevermind. This package comes from CPAN and was compiled for the previous perl
version. I removed all CPAN files and I am going to pacman-ize the perl modules
that I use. My perl modules should use Net::SSLeay from packages rather than
pull all changes via cpan.


Re: [arch-general] Perl 5.18 in [testing]

2013-05-27 Thread Anatol Pomozov
Hi


On Tue, May 21, 2013 at 1:28 AM, Florian Pritz bluew...@xinu.at wrote:

 Hi,

 Perl 5.18, as any other new perl version, requires all modules that are
 not purely perl code to be rebuilt. We did that for all packages in our
 repos.

 For a list of upstream changes please refer to `man perldelta`.

 Since users probably installed some from AUR or with
 CPANPLUS::Dist::Arch, I wrote a script[1] that generates a local rebuild
 list.

 [1]: http://git.server-speed.net/bin/plain/find-broken-perl-packages.sh

  - raw.txt contains a list of files that generated an error
  - perl-modules.txt contains a list of modules the files belong to
  - perl-dists.txt contains a list of distributions
  - pacman.txt contains a list of pacman packages the files belong to

 Binaries linking with libperl.so will also need to be rebuilt. You can
 use lddd from devtools to find those.

 Please report any issue you encounter.


I updated my system with pacman -Suy, and it brought the perl 5.18 update.

Now one of the perl apps that I use crashes. The app uses standard perl
modules, no third-party native modules. Could it be because of the perl
update?


===
Perl API version v5.16.0 of Net::SSLeay does not match v5.18.0 at
/usr/lib/perl5/site_perl/Net/SSLeay.pm line 370.
Compilation failed in require at
/usr/share/perl5/site_perl/IO/Socket/SSL.pm line 17.
BEGIN failed--compilation aborted at
/usr/share/perl5/site_perl/IO/Socket/SSL.pm line 17.
Compilation failed in require at /usr/share/perl5/site_perl/Net/HTTPS.pm
line 26.
Can't locate Net/SSL.pm in @INC (you may need to install the Net::SSL
module) (@INC contains: /usr/lib/perl5/site_perl /usr/share/perl5/site_perl
/usr/lib/perl5/vendor_perl /usr/share/perl5/vendor_perl
/usr/lib/perl5/core_perl /usr/share/perl5/core_perl .) at
/usr/share/perl5/site_perl/Net/HTTPS.pm line 30.
Compilation failed in require at /usr/share/perl5/site_perl/LWP/Protocol/
https.pm line 86.
..


Re: [arch-general] 'Check out-of-date packages' tool

2013-05-16 Thread Anatol Pomozov
Hi

On Tue, May 14, 2013 at 6:13 PM, Sébastien Luttringer se...@seblu.net wrote:
 On Sat, May 11, 2013 at 8:26 PM, Anatol Pomozov
 anatol.pomo...@gmail.com wrote:
 Hi everyone

 Per discussion in 'pacman-dev' maillist [1] I implemented a tool that tries
 to find Arch out-of-date packages. The tool scans PKGBUILD files is
 /var/abs directory, extracts download url and then tries to probe download
 urls for the next version. Next versions look like

 X.Y.Z+1
 X.Y+1.0
 X+1.0.0

 If any of the new versions presents on the download server it reports to
 user as 'new version available'.

 Here is the tool sources https://github.com/anatol/pkgoutofdate To make its
 usage even more pleasant I added it to AUR
 https://aur.archlinux.org/packages/pkgoutofdate-git/

 I use this[1] software since jully 2012 for my packages.
 Now, it handle comparaison against archweb, local pacman, abs tree, aur rpc
 or a local cache.
 You need to configure it[2] to check your packages, it's not automagic
 like yours by parsing abs tree.  But I don't want that :)

Two tools implemented to solve the problem of discovering out-of-date
packages indicate that this issue is important. Arch developers, have
you thought about adding such functionality to the standard Arch toolkit?


Re: [arch-general] 'Check out-of-date packages' tool

2013-05-14 Thread Anatol Pomozov
Hi

On Mon, May 13, 2013 at 9:51 AM, Don deJuan donjuans...@gmail.com wrote:
 On 05/12/2013 03:21 PM, Anatol Pomozov wrote:
 Hi


 On Sat, May 11, 2013 at 1:25 PM, Don deJuan donjuans...@gmail.com wrote:

 On 05/11/2013 11:26 AM, Anatol Pomozov wrote:
 Hi everyone

 Per discussion in 'pacman-dev' maillist [1] I implemented a tool that
 tries
 to find Arch out-of-date packages. The tool scans PKGBUILD files is
 /var/abs directory, extracts download url and then tries to probe
 download
 urls for the next version. Next versions look like

 X.Y.Z+1
 X.Y+1.0
 X+1.0.0

 If any of the new versions presents on the download server it reports to
 user as 'new version available'.

 Here is the tool sources https://github.com/anatol/pkgoutofdate To make
 its
 usage even more pleasant I added it to AUR
 https://aur.archlinux.org/packages/pkgoutofdate-git/

 To use it please install pkgoutofdate-git package:

 $ yaourt -S pkgoutofdate-git

 Then update abs database and run tool itself:

 $ sudo abs  pkgoutofdate

 That's it. The result looks like

 ...
 closure-linter: new version found - 2.3.8 = 2.3.9
 perl-data-dump: new version found - 1.21 = 1.22
 wgetpaste: new version found - 2.20 = 2.21
 fillets-ng-data: new version found - 1.0.0 = 1.0.1
 tablelist: new version found - 5.5 = 5.6
 ..


 There are some fals positive and negative results though, mostly because
 download servers return different sort of weird responses. I still work
 on
 work-arounds for all these cases.

 Hope you find this tool useful and it will help to make Arch software
 even
 more bleeding edge.

 [1]

 https://mailman.archlinux.org/pipermail/pacman-dev/2013-March/thread.html#16850
 Does this only work for packages found in the ABS or will it also work
 for AUR packages one might be maintaining?

 Only ABS right now. I did it because it is easy to traverse files under
 /var/abs and parse them.

 AUR requires additional step on fetching PKGBUILD from server. It should be
 fairly easy to add it. What is the recommended way to fetch files from aur?
 Just 'wget https://aur.archlinux.org/packages/pk/pkgoutofdate-git/PKGBUILD'?
 I would do it that was as it seems the easiest way to get just the
 PKGBUILD.

 Or could you add a flag where we can pass a directory, say where we
 store PKGBUILDs for the AUR we maintain.

Added a -d flag, now you can specify a custom directory to scan
for PKGBUILD files.


Re: [arch-general] 'Check out-of-date packages' tool

2013-05-12 Thread Anatol Pomozov
Hi


On Sat, May 11, 2013 at 1:25 PM, Don deJuan donjuans...@gmail.com wrote:

 On 05/11/2013 11:26 AM, Anatol Pomozov wrote:
  Hi everyone
 
  Per discussion in 'pacman-dev' maillist [1] I implemented a tool that
 tries
  to find Arch out-of-date packages. The tool scans PKGBUILD files is
  /var/abs directory, extracts download url and then tries to probe
 download
  urls for the next version. Next versions look like
 
  X.Y.Z+1
  X.Y+1.0
  X+1.0.0
 
  If any of the new versions presents on the download server it reports to
  user as 'new version available'.
 
  Here is the tool sources https://github.com/anatol/pkgoutofdate To make
 its
  usage even more pleasant I added it to AUR
  https://aur.archlinux.org/packages/pkgoutofdate-git/
 
  To use it please install pkgoutofdate-git package:
 
  $ yaourt -S pkgoutofdate-git
 
  Then update abs database and run tool itself:
 
  $ sudo abs  pkgoutofdate
 
  That's it. The result looks like
 
  ...
  closure-linter: new version found - 2.3.8 = 2.3.9
  perl-data-dump: new version found - 1.21 = 1.22
  wgetpaste: new version found - 2.20 = 2.21
  fillets-ng-data: new version found - 1.0.0 = 1.0.1
  tablelist: new version found - 5.5 = 5.6
  ..
 
 
  There are some fals positive and negative results though, mostly because
  download servers return different sort of weird responses. I still work
 on
  work-arounds for all these cases.
 
  Hope you find this tool useful and it will help to make Arch software
 even
  more bleeding edge.
 
  [1]
 
 https://mailman.archlinux.org/pipermail/pacman-dev/2013-March/thread.html#16850
 Does this only work for packages found in the ABS or will it also work
 for AUR packages one might be maintaining?


Only ABS right now. I did it because it is easy to traverse files under
/var/abs and parse them.

AUR requires an additional step of fetching the PKGBUILD from the server. It
should be fairly easy to add. What is the recommended way to fetch files from
the AUR? Just 'wget https://aur.archlinux.org/packages/pk/pkgoutofdate-git/PKGBUILD'?


Re: [arch-general] 'Check out-of-date packages' tool

2013-05-12 Thread Anatol Pomozov
Hi


On Sun, May 12, 2013 at 8:07 AM, Ross Lagerwall rosslagerw...@gmail.com wrote:

 On Sat, May 11, 2013 at 11:26:19AM -0700, Anatol Pomozov wrote:
  There are some fals positive and negative results though, mostly because
  download servers return different sort of weird responses. I still work on
  work-arounds for all these cases.
 

 Have you any ideas how to avoid showing development releases as new
 releases? For example, Perl 5.17 shows as a new release but AFAIK it is
 only a development release...


Currently pkgoutofdate does not have such functionality. Different
projects have different release strategies. Some projects use -betaXXX
for development releases, some use odd minor numbers in the version. This
information is project specific and should be stored in the PKGBUILD.

I proposed a solution in FS#34447 [1]. The idea is that if a project has
a specific release cycle then the PKGBUILD will contain an additional
function, let's say released_version(), and this function provides
pkgoutofdate the recommended version.

For example, for the project less (http://www.greenwoodsoftware.com/less/)
the function goes to the homepage, fetches its text, greps for the anchor
phrase and extracts the version:

released_version() {
  curl -s http://www.greenwoodsoftware.com/less/ | \
    grep -Po 'The current released version is less-(\d+)' | grep -Po '\d+'
}

This function returns the number from the homepage (currently 458). The number
is compared with the version in the PKGBUILD, and if the packaged version is
smaller (or just different) then the package is considered out of date.
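The comparison step can be sketched in plain shell using sort -V from GNU coreutils (on Arch, pacman's vercmp would be the native choice; the function name here is illustrative):

```shell
#!/bin/sh
# Return success (0) if version $1 sorts strictly before version $2
# according to GNU version sort.
version_lt() {
    [ "$1" != "$2" ] &&
        [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

# Compare the packaged version against the released one.
if version_lt "456" "458"; then
    echo "out of date"    # prints "out of date"
else
    echo "up to date"
fi
```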

This solution is somewhat similar to macport's livecheck implementation [2].

As I said, this is only required for a minority of projects. Most projects
are just fine with the next-version algorithm described in the first
message.


[1] https://bugs.archlinux.org/task/34447
[2] http://guide.macports.org/chunked/reference.livecheck.html


[arch-general] 'Check out-of-date packages' tool

2013-05-11 Thread Anatol Pomozov
Hi everyone

Per the discussion in the 'pacman-dev' mailing list [1] I implemented a tool
that tries to find out-of-date Arch packages. The tool scans PKGBUILD files in
the /var/abs directory, extracts the download url and then tries to probe
download urls for the next version. Next versions look like

X.Y.Z+1
X.Y+1.0
X+1.0.0

If any of the new versions is present on the download server it is reported to
the user as 'new version available'.
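The bumping scheme above can be sketched as a small shell function (an illustration of the idea only, not the tool's actual implementation):

```shell
#!/bin/sh
# Given a three-component version X.Y.Z, print the candidate next
# versions that the probing step would try: X.Y.Z+1, X.Y+1.0, X+1.0.0.
next_versions() {
    IFS=. read -r x y z <<EOF
$1
EOF
    echo "$x.$y.$((z + 1))"
    echo "$x.$((y + 1)).0"
    echo "$((x + 1)).0.0"
}

next_versions 2.3.8   # prints 2.3.9, 2.4.0 and 3.0.0, one per line
```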

Here are the tool sources: https://github.com/anatol/pkgoutofdate To make its
usage even more pleasant I added it to the AUR:
https://aur.archlinux.org/packages/pkgoutofdate-git/

To use it please install the pkgoutofdate-git package:

$ yaourt -S pkgoutofdate-git

Then update the abs database and run the tool itself:

$ sudo abs && pkgoutofdate

That's it. The result looks like

...
closure-linter: new version found - 2.3.8 => 2.3.9
perl-data-dump: new version found - 1.21 => 1.22
wgetpaste: new version found - 2.20 => 2.21
fillets-ng-data: new version found - 1.0.0 => 1.0.1
tablelist: new version found - 5.5 => 5.6
..


There are some false positive and negative results though, mostly because
download servers return different sorts of weird responses. I am still working
on workarounds for all these cases.

Hope you find this tool useful and that it helps make Arch software even
more bleeding edge.

[1]
https://mailman.archlinux.org/pipermail/pacman-dev/2013-March/thread.html#16850


Re: [arch-general] Current CPPFLAGS=-D_FORTIFY_SOURCE=2 break some builds

2013-05-07 Thread Anatol Pomozov
Hi, Allan


On Mon, May 6, 2013 at 3:34 PM, Allan McRae al...@archlinux.org wrote:

 On 07/05/13 06:20, Leonid Isaev wrote:
  On Mon, 6 May 2013 16:01:30 -0400
  Eric Bélanger snowmanisc...@gmail.com wrote:
 
  On Mon, May 6, 2013 at 3:45 PM, Leonid Isaev lis...@umail.iu.edu
 wrote:
 
  Hi,
 
  With gcc 4.8.0-4 I can no longer build core/links package from
 ABS,
  with SSL support. The issue is _not_related to makepkg (as I originally
  thought), even plain ./configure fails if I export
  CPPFLAGS=-D_FORTIFY_SOURCE=2, regardless of the content of
 {C,CXX,LD}FLAGS.
  Here is the error:
  
  $ ./configure --with-ssl
  [ ... ]
  checking for openssl... yes
  checking OPENSSL_CFLAGS...
  checking OPENSSL_LIBS... -lssl -lcrypto
  checking for OpenSSL... no
  checking for OpenSSL... no
  configure: error: OpenSSL not found
  $ cat config.log
  [ ... ]
  configure:8095: checking for openssl
  configure:8102: checking OPENSSL_CFLAGS
  configure:8107: checking OPENSSL_LIBS
  configure:8139: checking for OpenSSL
  configure:8150: gcc -o conftest -g -O2 -D_FORTIFY_SOURCE=2   conftest.c
  -lssl
  -lcrypto  -lm  15
  In file included from configure:8143:0:
  confdefs.h:8:16: error: duplicate 'unsigned'
   #define size_t unsigned
  ^
  configure: failed program was:
  #line 8143 configure
  #include confdefs.h
  #include openssl/ssl.h
  int main() {
  SSLeay_add_ssl_algorithms()
  ; return 0; }
  
 
  With gcc 4.7.2 all builds fine with Arch's default makepkg.conf, i.e.
 no
  duplicate unsigned error. Also, unsetting CPPFLAGS allows a
 successfull
  build.
 
  Since core/links has been successfully rebuilt, what was the gcc
 version?
  ANd
  can anyone else confirm the above issue?
 
  TIA,
  L.
 
 
  Aready fixed in links in testing. Just add a prepare function with:
sed -i /ac_cpp=/s/\$CPPFLAGS/\$CPPFLAGS -O2/ configure
 
 
  I see, thank you. Alternatively one could simply do CPPFLAGS+= -O2 in
  PKGBUILD...
 
  I'm still confused though: are we supposed to pass -On flags to cpp now
 (this
  is even mentioned against in the configure script)? Or is it still a
  gcc/glibc problem?
 

 The reason we do the sed is so -O2 is not passed with CPPFLAGS during
 the actual built.  This is just working around an autoconf limitation.


Could you please share more info about this autoconf limitation? It is not
clear to me why autoconf does not pass -O2, and why it is needed when
-D_FORTIFY_SOURCE=2 is enabled. Is it a bug that was reported to the autoconf
project? Or is it some fundamental issue that Arch packages will have to live
with forever?
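The sed workaround quoted above rewrites configure so header checks run the preprocessor with -O2. On a sample ac_cpp assignment (an illustrative line, not the full generated script) it behaves like this:

```shell
# A typical ac_cpp assignment inside a generated configure script:
line='ac_cpp="$CPP $CPPFLAGS"'

# Append -O2 to $CPPFLAGS on lines that set ac_cpp, as the PKGBUILD
# prepare() workaround does:
echo "$line" | sed '/ac_cpp=/s/\$CPPFLAGS/\$CPPFLAGS -O2/'
# prints: ac_cpp="$CPP $CPPFLAGS -O2"
```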


Re: [arch-general] Current CPPFLAGS=-D_FORTIFY_SOURCE=2 break some builds

2013-05-07 Thread Anatol Pomozov
Hi


On Tue, May 7, 2013 at 3:43 PM, Allan McRae al...@archlinux.org wrote:

 On 08/05/13 08:10, Anatol Pomozov wrote:
  Hi, Allan
 
 
  On Mon, May 6, 2013 at 3:34 PM, Allan McRae al...@archlinux.org wrote:
 
  On 07/05/13 06:20, Leonid Isaev wrote:
  On Mon, 6 May 2013 16:01:30 -0400
  Eric Bélanger snowmanisc...@gmail.com wrote:
 
  On Mon, May 6, 2013 at 3:45 PM, Leonid Isaev lis...@umail.iu.edu
  wrote:
 
  Hi,
 
  With gcc 4.8.0-4 I can no longer build core/links package
 from
  ABS,
  with SSL support. The issue is _not_related to makepkg (as I
 originally
  thought), even plain ./configure fails if I export
  CPPFLAGS=-D_FORTIFY_SOURCE=2, regardless of the content of
  {C,CXX,LD}FLAGS.
  Here is the error:
  
  $ ./configure --with-ssl
  [ ... ]
  checking for openssl... yes
  checking OPENSSL_CFLAGS...
  checking OPENSSL_LIBS... -lssl -lcrypto
  checking for OpenSSL... no
  checking for OpenSSL... no
  configure: error: OpenSSL not found
  $ cat config.log
  [ ... ]
  configure:8095: checking for openssl
  configure:8102: checking OPENSSL_CFLAGS
  configure:8107: checking OPENSSL_LIBS
  configure:8139: checking for OpenSSL
  configure:8150: gcc -o conftest -g -O2 -D_FORTIFY_SOURCE=2
 conftest.c
  -lssl
  -lcrypto  -lm  15
  In file included from configure:8143:0:
  confdefs.h:8:16: error: duplicate 'unsigned'
   #define size_t unsigned
  ^
  configure: failed program was:
  #line 8143 configure
  #include confdefs.h
  #include openssl/ssl.h
  int main() {
  SSLeay_add_ssl_algorithms()
  ; return 0; }
  
 
  With gcc 4.7.2 all builds fine with Arch's default makepkg.conf, i.e.
  no
  duplicate unsigned error. Also, unsetting CPPFLAGS allows a
  successfull
  build.
 
  Since core/links has been successfully rebuilt, what was the gcc
  version?
  ANd
  can anyone else confirm the above issue?
 
  TIA,
  L.
 
 
  Aready fixed in links in testing. Just add a prepare function with:
sed -i /ac_cpp=/s/\$CPPFLAGS/\$CPPFLAGS -O2/ configure
 
 
  I see, thank you. Alternatively one could simply do CPPFLAGS+= -O2 in
  PKGBUILD...
 
  I'm still confused though: are we supposed to pass -On flags to cpp now
  (this
  is even mentioned against in the configure script)? Or is it still a
  gcc/glibc problem?
 
 
  The reason we do the sed is so -O2 is not passed with CPPFLAGS during
  the actual built.  This is just working around an autoconf limitation.
 
 
  Could you please share more info about autoconf limitation? It is not
 clear
  for me why autoconf does not pass -O2. And why it is needed when
  -D_FORTIFY_SOURCE=2 is enabled. Is it a bug that reported to autoconf
  project? Or maybe it is some fundamental issue that Arch packages will
 live
  forever?
 

 In short, autoconf is making broken assumptions about warnings given of
 by gcc.  Autoconf checks for headers by looking for a warning from gcc
 when it is missing - but not a specific warning, any warning...
 -D_FORTIFY_SOURCE=2 gives a warning when is it not used with
 optimization so the header check fails incorrectly.   Autoconf should
 not pass -O2 with CPPFLAGS because it is not a preprocessor flag.


Was this issue reported upstream?


 Note that not all software that uses autoconf is affected.  Some do not
 pass CPPFLAGS when testing for headers.


Re: [arch-general] libxml2 out of date

2013-05-01 Thread Anatol Pomozov
Hi

On Wed, Apr 24, 2013 at 2:47 PM, Anatol Pomozov
anatol.pomo...@gmail.com wrote:
 Hi

 On Tue, Apr 23, 2013 at 5:49 AM, Hussam Al-Tayeb hus...@visp.net.lb wrote:
 On Tuesday 23 April 2013 09:05:33 Ross Lagerwall wrote:
 Hi,

 Is there a reason (other than lack of time) that libxml2 has not been
 updated from 2.8 to 2.9 (now 2.9.1)?

 Regards

 As far as I can tell, it breaks a lot of applications.

 I believe the next question is Hussam, Jan is there anything we can
 do to make libxml2 upgrade happen?

Let me put this statement another way: I have some time that I would
like to contribute back to Arch Linux. Trying to resolve the libxml2
upgrade issues seems like a useful task. Could anyone provide me more
information about the libxml2 upgrade problems? Jan, as the maintainer
you should know. If it breaks apps then could you give me the list?
I'll try to look at it.


Re: [arch-general] libxml2 out of date

2013-04-25 Thread Anatol Pomozov
Hi

On Tue, Apr 23, 2013 at 5:49 AM, Hussam Al-Tayeb hus...@visp.net.lb wrote:
 On Tuesday 23 April 2013 09:05:33 Ross Lagerwall wrote:
 Hi,

 Is there a reason (other than lack of time) that libxml2 has not been
 updated from 2.8 to 2.9 (now 2.9.1)?

 Regards

 As far as I can tell, it breaks a lot of applications.

I believe the next question is "Hussam, Jan, is there anything we can
do to make the libxml2 upgrade happen?"


Re: [arch-general] libxml2 out of date

2013-04-23 Thread Anatol Pomozov
Hi

On Tue, Apr 23, 2013 at 5:49 AM, Hussam Al-Tayeb hus...@visp.net.lb wrote:
 On Tuesday 23 April 2013 09:05:33 Ross Lagerwall wrote:
 Hi,

 Is there a reason (other than lack of time) that libxml2 has not been
 updated from 2.8 to 2.9 (now 2.9.1)?

 Regards

 As far as I can tell, it breaks a lot of applications.

Do you have a list of applications that are broken? Could it be fixed
by recompiling those applications against 2.9.X?

I see that the upcoming Ubuntu is using 2.9.0
http://packages.ubuntu.com/search?suite=raring&keywords=libxml2 and I
assume their applications work fine.


Re: [arch-general] [arch-dev-public] Build issues due to -D_FORTIFY_SOURCE=2 in CPPFLAGS

2013-04-10 Thread Anatol Pomozov
Hi

On Wed, Apr 10, 2013 at 11:07 AM, Joakim Hernberg j...@alchemy.lu wrote:
 On Mon, 08 Apr 2013 16:54:54 +1000
 Allan McRae al...@archlinux.org wrote:

 Hi,

 With pacman-4.1 we introduced CPPFLAGS in makepkg.conf and moved the
 -D_FORTIFY_SOURCE=2 flag out of C{,XX}FLAGS to there (where it should
 be).

 This of course will break wine's builtin workaround for FORTIFY_SOURCE,
 which assumed that it was in CFLAGS.  Maybe no matter as for some
 reason it wasn't working on archlinux in any case.

 It produces the following (snipped from the gcc output)
 -D_FORTIFY_SOURCE=2 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=0, which for
 some reason seems to produce broken code.

This also breaks the 'crash' AUR package. I worked around the issue with
'unset CPPFLAGS' before building the package.

https://aur.archlinux.org/packages/crash/
https://gist.github.com/anatol/5326430


Re: [arch-general] UEFI madness

2013-03-08 Thread Anatol Pomozov
Hi

On Fri, Mar 1, 2013 at 10:44 PM, David Benfell
benf...@parts-unknown.org wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 Hi all,

 So far, my attempt to install Arch Linux on a UEFI system is a total
 facepalm moment. The problem is in booting post-install.

 So, first, does anyone have actual--and successful--experience
 installing Arch on a UEFI system? Yes, I went to the Arch Wiki, which
 initially pointed me at GummiBoot. There are actually two sets of
 instructions, one given where I looked first, for the UEFI entry, and
 another under the entry for GummiBoot. Neither succeeds, but I wound up
 following the latter set of instructions (and cleaning up extra entries
 with efibootmgr, which fortunately makes this relatively easy).

 GummiBoot says it can't find /vmlinuz-linux. I tried modifying the
 configuration to say /boot/vmlinuz-linux, but no joy. Apparently, I'm
 really supposed to copy this file and the initrd image to the EFI
 partition, but nobody says where in the EFI partition, so I have no idea.

 I also tried following the instructions for grub-efi. I'm just
 mystified. I managed to install the right package, but from there I just
 wasn't understanding a thing. I've been using linux since 1999 so this
 shouldn't be so completely mystifying.

 I tried installing rEFInd (from sourceforge). As near as I can tell, it
 does indeed detect all the possible boot options on the system. But when
 I try booting the Arch installation, it says it can't find the root
 partition. It also detects the GummiBoot option, but that leads the same
 place as before. Finally, it detects the Windows option, which I hope
 still works (unfortunately I do need this).

 I guess getting something that just works--like it did with BIOS
 systems--is not in the cards. What do I do now?

I installed Arch on my new home server several days ago and had a
similar experience with UEFI to yours.

I decided to go with gummiboot as it sounds simpler, and the installation
instructions [1] are cleaner than for other UEFI bootloaders.
efibootmgr did not work for me, so I started by copying it to the
'default' location $esp/EFI/boot/bootx64.efi for x86_64 systems (see
[1]). I was hoping that my ASRock H61M/U3S3 would recognize the gummiboot
bootloader. But no. It showed me a cryptic error, something like "No device
found".

I reformatted the UEFI system partition and upgraded the motherboard UEFI, but
nothing helped. So I renamed gummiboot.efi to $ESP/shellx64.efi and
then booted the default shell from the motherboard's UEFI graphical UI. Only
then was I able to run efibootmgr and set the path to the gummiboot binaries.

In summary: my motherboard does not recognize $esp/EFI/boot/bootx64.efi
as the default boot binary.

I mounted $ESP to /boot, copied the kernel there, set up a systemd hook for
the gummiboot binaries and everything works like a charm now. Additionally I
use multiple-device btrfs [2] for the root fs, so I had to add the "btrfs"
hook to mkinitcpio.conf.
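The working setup boils down to a few commands (a sketch of my configuration, not a universal recipe; the device paths and the root= option below are placeholders you must adapt):

```shell
# Mount the EFI system partition at /boot and install gummiboot there.
mount /dev/sda1 /boot            # /dev/sda1 is a placeholder for your ESP
gummiboot --path=/boot install

# A minimal loader entry; adjust root= for your own disk layout.
cat > /boot/loader/entries/arch.conf <<'EOF'
title   Arch Linux
linux   /vmlinuz-linux
initrd  /initramfs-linux.img
options root=/dev/sda2 rw
EOF
```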

[1] https://wiki.archlinux.org/index.php/Gummiboot
[2] https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices

PS: UEFI works great with Arch, but some Arch UEFI documentation
cleanup is required.


Re: [arch-general] Truecrypt mounting vanishes desktop icons

2013-01-14 Thread Anatol Pomozov
Hi

On Sun, Jan 13, 2013 at 2:58 AM, Greg . greg_z0...@hotmail.com wrote:


 When i use
 truecrypt --mount /PATH/OF/FILE /home/usr/Desktop
 all of my folders vanish. I then tried to unmount using
 truecrypt -d /PATH/OF/FILE and nothing, but after reboot the folders are
 back on Desktop.


What you did here is mount a truecrypt folder (which uses fuse technology)
on top of your Desktop. The Desktop files have not disappeared; they are
hidden behind the truecrypt folder you just mounted.

To restore them you need to unmount the Desktop directory. Something like

$ fusermount -u /home/usr/Desktop


 Is there any way to be able to mount on Desktop but not lose my folders?


Most likely you do not want to mount truecrypt (or any other fuse filesystem)
onto your Desktop. I would suggest mounting it to another folder, e.g.
~/Desktop/secret:

$ mkdir ~/Desktop/secret
$ truecrypt --mount /PATH/OF/FILE ~/Desktop/secret


Re: [arch-general] Fuse and out-of-date packages in general

2012-11-11 Thread Anatol Pomozov
+ronald

Hi

On Sat, Nov 10, 2012 at 10:22 AM, Lukas Jirkovsky l.jirkov...@gmail.com wrote:
 Actually I have more general question. There are many out-of-date
 packages that are not updated for a long time. Should other
 (non-package owners) take care of it?

 I can't tell why fuse is not updated, but often there is a reason for
 package being outdated for a longer period of time. For example, the
 rawtherapee package form [community], which is now outdated probably
 for a few months, was never updated because the current version has a
 bug that makes it crash on almost any action (the new version with a
 fix should be released soon though).

 If you want to help, I guess you could update a PKGBUILD, test it and
 send the PKGBUILD directly to the package maintainer. At least you
 should get a reply why the package is not updated.

Here is the updated PKGBUILD file. And its diff:

7c7
< pkgver=2.9.1
---
> pkgver=2.9.2
19c19
< sha1sums=('ed9e8ab740576a038856109cc587d52a387b223f'
---
> sha1sums=('fcfc005b1edcd5e8b325729a1b325ff0e8b2a5ab'
29c29
<   --enable-util --bindir=/bin
---
>   --enable-util --disable-example --bindir=/bin

I updated the package version and added the '--disable-example' configure
option. The second change is optional but makes the build a little bit
faster, as we do not need the examples for the Arch package.


Regarding 2.9.2 version stability: this is an incremental update that is
recommended by the upstream developer. One of my current job duties is
to support fuse filesystems on the company's production servers (this
also includes backporting kernel bug fixes). We have been using 2.9.2 on our
24x7 servers for a while and we do not see any issues with it. I think this
version is stable enough.


PKGBUILD
Description: Binary data


[arch-general] Fuse and out-of-date packages in general

2012-11-10 Thread Anatol Pomozov
Hi,

fuse 2.9.2 was released a while ago. This release contains an important
bug fix for a deadlock, and there are Arch users waiting for this
version: https://bbs.archlinux.org/viewtopic.php?id=146157

Some time ago I marked the package as out-of-date
https://www.archlinux.org/packages/extra/i686/fuse/ but the package
maintainer seems busy and is ignoring it. Is there any way to
force upgrading fuse?

Actually I have a more general question. There are many out-of-date
packages that are not updated for a long time. Should others
(non-package-owners) take care of them? Is there any way for non-Arch
developers to participate in package maintenance? I have in mind
something like MacPorts or Homebrew [1], where anyone can easily send a
package update that can be reviewed and submitted to the repo by
developers. I find this is a great way to involve more people in a
project; for example, I would be happy to send an update to the fuse
package. But I cannot find such instructions for Arch contributors.

[1] https://github.com/mxcl/homebrew/pulls