[Gretl-users] Re: Gretl-Hansl "IDE" for Sublime editor

2023-12-28 Thread stas
I like to use the Codelobster editor - https://codelobster.com
___
Gretl-users mailing list -- gretl-users@gretlml.univpm.it
To unsubscribe send an email to gretl-users-le...@gretlml.univpm.it
Website: 
https://gretlml.univpm.it/postorius/lists/gretl-users.gretlml.univpm.it/


[yakuake] [Bug 435544] Application focus issue

2023-08-27 Thread Stas Egorov
https://bugs.kde.org/show_bug.cgi?id=435544

--- Comment #13 from Stas Egorov  ---
It looks like the behavior depends on the window manager.

On Xfwm and KWin this bug is reproducible.
It does not reproduce on Openbox.

That being said, I'm using LXQt as my desktop environment.

-- 
You are receiving this mail because:
You are watching all bug changes.

[Nut-upsdev] Liebert PSA 1500 (500, 1000, 650, ...)

2023-07-20 Thread Stas via Nut-upsdev

Hello

The page https://networkupstools.org/ddl/Liebert/PSA_1500.html doesn't 
contain all the information about the PSA series of UPSes.


First, the "*device.serial*" variable is a dummy: it is always empty, and 
"*ups.serial*" is always empty as well.


Second, the configuration file for all "PSA" UPSes contains these lines:

[PSA]
    driver = "usbhid-ups"
    port = "auto"
    vendorid = "10AF"
    productid = "0001"
    product = "LiebertPSA"

This is my /etc/nut/ups.conf for the Vertiv (ex-Liebert) PSA500MT3-230U.


--
Stanislav Dyogtev
"Your Admin" service
 My contacts:
 - email: stas.grumb...@gmail.com and s...@vashadmin.su
 - phones in Yekaterinburg: +79222112259 (+telegram), +79505571146, +79193628944
___
Nut-upsdev mailing list
Nut-upsdev@alioth-lists.debian.net
https://alioth-lists.debian.net/cgi-bin/mailman/listinfo/nut-upsdev


[PHP-WEBMASTER] Sec Bug->Bug #81523 [Opn]: The search bar in your site no contains atributte "maxlenght"

2023-05-24 Thread stas
Edit report at https://bugs.php.net/bug.php?id=81523&edit=1

 ID: 81523
 Updated by: s...@php.net
 Reported by:neibase123 at gmail dot com
 Summary:The search bar in your site no contains atributte
 "maxlenght"
 Status: Open
-Type:   Security
+Type:   Bug
 Package:Website problem
 Operating System:   irrelevante
 PHP Version:Irrelevant
 Block user comment: N
 Private report: N



Previous Comments:

[2023-05-24 06:32:09] tradingstatsf at gmail dot com

My Best Home Designs are sharing latest news about home design, home 
decoration, ,realestate etc. More info to 
visit:(https://mybesthomedesigns.com)github.com


[2021-10-14 10:06:04] c...@php.net

The missing maxlength attribute is certainly not a security issue,
since a client can ignore that.  Not restricting the length
server-side, however, might be an issue in this case.


[2021-10-13 17:06:11] neibase123 at gmail dot com

Description:

Your site's search bar doesn't contain the "maxlength" HTML attribute. I can 
enter an absurd number of characters, and if your server doesn't filter them, 
this could enable a denial-of-service (DoS) attack.

Test script:
---
// This script works on any page of the site that contains the search bar.
// Paste the lines into the browser console one at a time.
// Tested on https://www.php.net/



document.getElementsByName("pattern")[0].value = "A".repeat(1000)

document.getElementsByName("pattern")[0].value;

Expected result:

Demonstrates that a huge value can be set in the search bar; if an attacker 
submits it and the server doesn't limit the input length, this could enable a 
DoS attack.
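As the earlier comment from c...@php.net points out, the real fix is server-side, since a client can always strip or ignore a `maxlength` attribute. A minimal sketch of such a guard (Python, purely illustrative -- the function name and the limit are assumptions, not php.net's actual implementation):

```python
# Hypothetical server-side guard; MAX_QUERY_LEN and sanitize_query
# are illustrative names, not php.net's actual code.
MAX_QUERY_LEN = 128

def sanitize_query(raw: str) -> str:
    """Clamp search input on the server: the client-side maxlength
    attribute is advisory only, so the server must enforce the cap."""
    return raw[:MAX_QUERY_LEN]

# An absurdly long query is truncated before it reaches the search backend.
assert len(sanitize_query("A" * 1_000_000)) == MAX_QUERY_LEN
```

Whether to truncate or reject outright is a design choice; rejecting with an error is stricter, while truncating keeps the search usable.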







--
Edit this bug report at https://bugs.php.net/bug.php?id=81523&edit=1

-- 
PHP Webmaster List Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [go-nuts] Automation with the ultimate guide to GO language

2023-01-11 Thread Stas Maksimov
Hi Ashwin,

What is network automation exactly? You seem to have mentioned it eight
times in the book description without going into detail.

Is it possible to see the table of contents?

And of course you can charge what you like for your book, but personally I
find 25 bucks for a 100-page book quite steep.

Kind regards,
Stas

On Wed 11 Jan 2023 at 20:27, ashwin shetty  wrote:

> Unlock the full potential of automation with the ultimate guide to GO
> language. Discover the power of GO's efficient and streamlined syntax while
> mastering key techniques for automating repetitive tasks and optimizing
> performance. Whether you're a seasoned developer or new to programming,
> this book is an essential resource for mastering GO and driving your
> automation projects to success.
>
> https://www.amazon.com/dp/B0BRDG5Y4P
>
>
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "golang-nuts" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to golang-nuts+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/golang-nuts/c82c36dd-130e-4c77-8c4a-174b4741b9dbn%40googlegroups.com
> <https://groups.google.com/d/msgid/golang-nuts/c82c36dd-130e-4c77-8c4a-174b4741b9dbn%40googlegroups.com?utm_medium=email_source=footer>
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to golang-nuts+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/golang-nuts/CAAZYd_n3VxVO3hwChHKBDEOUG_iJmvoMw6NHrJPt8tfDM7J0qQ%40mail.gmail.com.


[Ubuntu-x-swat] [Bug 2000476] [NEW] Old intel/media-driver in Ubuntu 22.10 for Intel Alder Lake

2022-12-26 Thread Stas Arieshyn
Public bug reported:

My laptop has a 12th Gen (Alder Lake) Intel(R) Core(TM) i9-12900HX. Hardware 
video decoding didn't work on it because of outdated packages, so I had to 
compile and install these packages manually from their git repos:
- libva 2.16.0 (and utils) (latest at the moment)
  https://github.com/intel/libva.git
  https://github.com/intel/libva-utils.git
- intel gmmlib 22.3.2 (latest at the moment)
  https://github.com/intel/gmmlib.git
- media-driver 22.6.4 (latest at the moment)
  https://github.com/intel/media-driver.git


Before changes:
```
$ vainfo
libva info: VA-API version 1.15.0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_14
libva error: /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so init failed
libva info: va_openDriver() returns 1
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
libva info: Found init function __vaDriverInit_1_10
libva error: /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so init failed
libva info: va_openDriver() returns -1
vaInitialize failed with error code -1 (unknown libva error),exit
```

After installing new versions:
```
$ vainfo
Trying display: wayland
Trying display: x11
libva info: VA-API version 1.17.0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_16
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.17 (libva 2.16.0)
vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics - 22.6.4 
(aca8ee098)
vainfo: Supported profile and entrypoints
  VAProfileNone   : VAEntrypointVideoProc
  VAProfileNone   : VAEntrypointStats
  VAProfileMPEG2Simple: VAEntrypointVLD
  VAProfileMPEG2Simple: VAEntrypointEncSlice
  VAProfileMPEG2Main  : VAEntrypointVLD
  VAProfileMPEG2Main  : VAEntrypointEncSlice
  VAProfileH264Main   : VAEntrypointVLD
  VAProfileH264Main   : VAEntrypointEncSlice
  VAProfileH264Main   : VAEntrypointFEI
  VAProfileH264Main   : VAEntrypointEncSliceLP
  VAProfileH264High   : VAEntrypointVLD
  VAProfileH264High   : VAEntrypointEncSlice
  VAProfileH264High   : VAEntrypointFEI
  VAProfileH264High   : VAEntrypointEncSliceLP
  VAProfileVC1Simple  : VAEntrypointVLD
  VAProfileVC1Main: VAEntrypointVLD
  VAProfileVC1Advanced: VAEntrypointVLD
  VAProfileJPEGBaseline   : VAEntrypointVLD
  VAProfileJPEGBaseline   : VAEntrypointEncPicture
  VAProfileH264ConstrainedBaseline: VAEntrypointVLD
  VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
  VAProfileH264ConstrainedBaseline: VAEntrypointFEI
  VAProfileH264ConstrainedBaseline: VAEntrypointEncSliceLP
  VAProfileHEVCMain   : VAEntrypointVLD
  VAProfileHEVCMain   : VAEntrypointEncSlice
  VAProfileHEVCMain   : VAEntrypointFEI
  VAProfileHEVCMain   : VAEntrypointEncSliceLP
  VAProfileHEVCMain10 : VAEntrypointVLD
  VAProfileHEVCMain10 : VAEntrypointEncSlice
  VAProfileHEVCMain10 : VAEntrypointEncSliceLP
  VAProfileVP9Profile0: VAEntrypointVLD
  VAProfileVP9Profile0: VAEntrypointEncSliceLP
  VAProfileVP9Profile1: VAEntrypointVLD
  VAProfileVP9Profile1: VAEntrypointEncSliceLP
  VAProfileVP9Profile2: VAEntrypointVLD
  VAProfileVP9Profile2: VAEntrypointEncSliceLP
  VAProfileVP9Profile3: VAEntrypointVLD
  VAProfileVP9Profile3: VAEntrypointEncSliceLP
  VAProfileHEVCMain12 : VAEntrypointVLD
  VAProfileHEVCMain12 : VAEntrypointEncSlice
  VAProfileHEVCMain422_10 : VAEntrypointVLD
  VAProfileHEVCMain422_10 : VAEntrypointEncSlice
  VAProfileHEVCMain422_12 : VAEntrypointVLD
  VAProfileHEVCMain422_12 : VAEntrypointEncSlice
  VAProfileHEVCMain444: VAEntrypointVLD
  VAProfileHEVCMain444: VAEntrypointEncSliceLP
  VAProfileHEVCMain444_10 : VAEntrypointVLD
  VAProfileHEVCMain444_10 : VAEntrypointEncSliceLP
  VAProfileHEVCMain444_12 : VAEntrypointVLD
  VAProfileHEVCSccMain: VAEntrypointVLD
  VAProfileHEVCSccMain: VAEntrypointEncSliceLP
  VAProfileHEVCSccMain10  : VAEntrypointVLD
  VAProfileHEVCSccMain10  : VAEntrypointEncSliceLP
  VAProfileHEVCSccMain444 : VAEntrypointVLD
  VAProfileHEVCSccMain444 : VAEntrypointEncSliceLP
  VAProfileAV1Profile0: VAEntrypointVLD
  VAProfileHEVCSccMain444_10  : VAEntrypointVLD
```

Re: [OpenSIPS-Users] dialplan out_var

2022-08-30 Thread Stas Kobzar
Hi,
$fn is the From display name, and the name may be quoted in the header value.
Try using $fU instead to get the username part (the actual number).
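A minimal sketch of that suggestion, adapted from the config quoted later in the thread (untested; variable names kept from the original):

```cfg
# Translate the From username ($fU) rather than the display name ($fn);
# $fU carries the user part of the From URI, which is never quoted.
if (dp_translate(0, $fU, $var(dp_out), $var(dp_attrs))) {
    xlog("L_INFO", "$ci translated to var $var(dp_out) with attributes: '$var(dp_attrs)'\n");
}
```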

On Tue, Aug 30, 2022 at 9:00 AM Bogdan-Andrei Iancu 
wrote:

> Again,
>
> your DP rule is performing NO change on the input. The whole input, as
> received, is provided as output. And the quotes you see in the output value
> are part of the input value.
>
> Regards,
>
> Bogdan-Andrei Iancu
>
> OpenSIPS Founder and Developer
>   https://www.opensips-solutions.com
> OpenSIPS Summit 27-30 Sept 2022, Athens
>   https://www.opensips.org/events/Summit-2022Athens/
>
> On 8/30/22 10:18 AM, Антон Ершов wrote:
>
> that's the point: there are no conversions, yet quotes appear
>
> вт, 30 авг. 2022 г. в 09:26, Bogdan-Andrei Iancu :
>
>> Your DP rule is doing nothing as a transformation: it matches
>> everything and returns it unchanged as output, so I'm not sure what
>> your expectations are here.
>>
>> Regards,
>>
>> Bogdan-Andrei Iancu
>>
>> OpenSIPS Founder and Developer
>>   https://www.opensips-solutions.com
>> OpenSIPS Summit 27-30 Sept 2022, Athens
>>   https://www.opensips.org/events/Summit-2022Athens/
>>
>> On 8/29/22 4:59 PM, Антон Ершов wrote:
>>
>> Maybe so,
>> but how can one go wrong with this simple rule?
>>
>> "id" "dpid" "pr" "match_op" "match_exp" "match_flags" "subst_exp"
>> "repl_exp" "timerec" "disabled" "attrs"
>> 1 0 0 1 ".*" 0 "^(.*)$" "\1" 0 "test"
>>
>> /usr/sbin/opensips[30317]: DBG:dialplan:dp_translate_f: dpid is 0
>> partition is default
>> /usr/sbin/opensips[30317]: DBG:dialplan:dp_translate_f: input is
>> "00139939484"
>> /usr/sbin/opensips[30317]: DBG:dialplan:dp_translate_f: checking with
>> dpid 0
>> /usr/sbin/opensips[30317]: DBG:dialplan:test_match: test_match:[0]
>> "00139939484"
>> /usr/sbin/opensips[30317]: DBG:dialplan:translate: Regex operator
>> testing. Got result: 0
>> /usr/sbin/opensips[30317]: DBG:dialplan:translate: Found a matching rule
>> 0x7f00fee33698: pr 0, match_exp .*
>> /usr/sbin/opensips[30317]: DBG:dialplan:translate: the rule's attrs are
>> test
>> /usr/sbin/opensips[30317]: DBG:dialplan:translate: the copied attributes
>> are: test
>> /usr/sbin/opensips[30317]: DBG:dialplan:test_match: test_match:[0]
>> "00139939484"
>> /usr/sbin/opensips[30317]: DBG:dialplan:test_match: test_match:[1]
>> "00139939484"
>> /usr/sbin/opensips[30317]: DBG:dialplan:dp_translate_f: input
>> "00139939484" with dpid 0 => output "00139939484"
>>
>> пн, 29 авг. 2022 г. в 16:43, Bogdan-Andrei Iancu :
>>
>>> Hi,
>>>
>>> No quotes are added by the dialplan module at all. I think the output value
>>> inherited the quotes from the input value, the From Display Name, which may
>>> be a quoted value.
>>>
>>> Regards,
>>>
>>> Bogdan-Andrei Iancu
>>>
>>> OpenSIPS Founder and Developer
>>>   https://www.opensips-solutions.com
>>> OpenSIPS Summit 27-30 Sept 2022, Athens
>>>   https://www.opensips.org/events/Summit-2022Athens/
>>>
>>> On 8/29/22 3:35 PM, Антон Ершов wrote:
>>>
>>> Hello friends!
>>>
>>> In version: opensips 3.2.8 (x86_64/linux)
>>> I observe strange behavior in the dialplan module: the value returned in
>>> the $var(out) variable is wrapped in quotes. In version 3.2.5 no such
>>> behavior was observed. This forces you to do additional work on the
>>> result in order to use it further.
>>>
>>> my config
>>> if (dp_translate(0, $fn, $var(dp_out), $var(dp_attrs))) {
>>>   xlog("L_INFO", "$ci translated to var $var(dp_out) with
>>> attributes: '$var(dp_attrs)'\n");
>>>   ...
>>> }
>>>
>>> show in console
>>>
>>> /usr/sbin/opensips[30318]: 287b5bea-26c4-11ed-abcd-016f252b0962
>>> translated to var "12345" with attributes: 'test'
>>>
>>> as you can see, the value is wrapped in quotes.
>>> If you try to use the value of the variable somewhere else, for
>>> example in uac_replace, the quotation marks are also present.
>>>
>>> ___
>>> Users mailing list
>>> Users@lists.opensips.org
>>> http://lists.opensips.org/cgi-bin/mailman/listinfo/users
>>>
>>>
>>>
>>
> ___
> Users mailing list
> Users@lists.opensips.org
> http://lists.opensips.org/cgi-bin/mailman/listinfo/users
>
___
Users mailing list
Users@lists.opensips.org
http://lists.opensips.org/cgi-bin/mailman/listinfo/users


Re: bug in (log)

2022-07-03 Thread Stas Boukarev
This is permitted by
http://www.lispworks.com/documentation/HyperSpec/Body/12_acc.htm

On Sun, Jul 3, 2022 at 12:10 AM James Cloos  wrote:
>
> (GitLab is unusable; I have to report here.)
>
> ecl 21.2.1 gives:
>
> > (log 1/6319748715279270675921934218987893281199411530039296)
>
> Debugger received error of type: DIVISION-BY-ZERO
> #
> Error flushed.
>
> whereas other CLs (I tested SBCL and CCL) give results like:
>
> ? (log 1/6319748715279270675921934218987893281199411530039296)
> -119.27552
>
> I tested on amd64 (gentoo) and arm64 (debian and netbsd) with identical
> results.  I did not have a musl box or another BSD to test on.
>
> run with --no-trap-fpe, the result is #.
>
> another example is:
>
> (truncate (log 1/6319748715279270675921934218987893281199418867))
>
> Debugger received error of type: ARITHMETIC-ERROR
> #
>
> whereas this works:
>
> (truncate (log 1/631974871527927067592193421898789328119941867))
>
> -103
> -0.27893066
>
> which of course suggests that the issue is the precision of C's long double.
>
> it looks like ecl could use an mpq log function;
> https://github.com/linas/anant might work.
>
> -JimC
> --
> James Cloos  OpenPGP: 0x997A9F17ED7DAEA6
>
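The long-double diagnosis can be illustrated outside ECL: once a rational is collapsed to a machine float, anything outside the float's range or precision is already lost before the log is taken. Taking logs of the exact numerator and denominator separately sidesteps the conversion entirely. A sketch in Python (illustrative only, not ECL's implementation):

```python
from fractions import Fraction
import math

def log_of_fraction(q: Fraction) -> float:
    # log(p/q) = log(p) - log(q); both parts are exact integers, so the
    # rational never has to survive conversion to a machine float.
    return math.log(q.numerator) - math.log(q.denominator)

r = Fraction(1, 6319748715279270675921934218987893281199411530039296)
print(log_of_fraction(r))        # close to SBCL's -119.27552

tiny = Fraction(1, 10**400)      # underflows to 0.0 as a double
assert float(tiny) == 0.0        # naive conversion loses everything
print(log_of_fraction(tiny))     # about -921.034, no underflow
```

This mirrors the mpq-log suggestion at the end of the report: work on the exact rational's parts instead of forcing it through a C floating-point type.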



Re: [PATCH v4] c-format: Add -Wformat-int-precision option [PR80060]

2022-01-07 Thread Daniil Stas via Gcc-patches
On Tue, 21 Dec 2021 00:43:24 +0200
Daniil Stas  wrote:

> On Sat, 27 Nov 2021 22:18:23 +
> Daniil Stas  wrote:
> 
> > This option is enabled by default when -Wformat option is enabled. A
> > user can specify -Wno-format-int-precision to disable emitting
> > warnings when passing an argument of an incompatible integer type to
> > a 'd', 'i', 'b', 'B', 'o', 'u', 'x', or 'X' conversion specifier
> > when it has the same precision as the expected type.
> > 
> > Signed-off-by: Daniil Stas 
> > 
> > gcc/c-family/ChangeLog:
> > 
> > * c-format.c (check_format_types): Don't emit warnings when
> > passing an argument of an incompatible integer type to
> > a 'd', 'i', 'b', 'B', 'o', 'u', 'x', or 'X' conversion
> > specifier when it has the same precision as the expected
> > type if -Wno-format-int-precision option is specified.
> > * c.opt: Add -Wformat-int-precision option.
> > 
> > gcc/ChangeLog:
> > 
> > * doc/invoke.texi: Add -Wformat-int-precision option
> > description.
> > 
> > gcc/testsuite/ChangeLog:
> > 
> > * c-c++-common/Wformat-int-precision-1.c: New test.
> > * c-c++-common/Wformat-int-precision-2.c: New test.
> > ---
> > Changes for v4:
> >   - Added 'b' and 'B' format specifiers to the option descriptions.
> > 
> > Changes for v3:
> >   - Added additional @code{} directives to the documentation where
> > needed.
> >   - Changed tests to run on "! long_neq_int" target instead of
> > "lp64".
> >   - Added a test case to check that gcc still emits warnings for
> > arguments with different precision even with
> > -Wno-format-int-precision option enabled.
> > 
> > Changes for v2:
> >   - Changed the option name to -Wformat-int-precision.
> >   - Changed the option description as was suggested by Martin.
> >   - Changed Wformat-int-precision-2.c to use dg-bogus instead of
> > previous invalid syntax.
> > 
> >  gcc/c-family/c-format.c |  2 +-
> >  gcc/c-family/c.opt  |  6 ++++++
> >  gcc/doc/invoke.texi | 17 ++++++++++++++++-
> >  .../c-c++-common/Wformat-int-precision-1.c  |  7 +++++++
> >  .../c-c++-common/Wformat-int-precision-2.c  |  8 ++++++++
> >  5 files changed, 38 insertions(+), 2 deletions(-)
> >  create mode 100644 gcc/testsuite/c-c++-common/Wformat-int-precision-1.c
> >  create mode 100644 gcc/testsuite/c-c++-common/Wformat-int-precision-2.c
> > 
> > diff --git a/gcc/c-family/c-format.c b/gcc/c-family/c-format.c
> > index e735e092043..c66787f931f 100644
> > --- a/gcc/c-family/c-format.c
> > +++ b/gcc/c-family/c-format.c
> > @@ -4248,7 +4248,7 @@ check_format_types (const substring_loc &fmt_loc,
> >   && (!pedantic || i < 2)
> >   && char_type_flag)
> > continue;
> > -  if (types->scalar_identity_flag
> > +  if ((types->scalar_identity_flag || !warn_format_int_precision)
> >   && (TREE_CODE (cur_type) == TREE_CODE (wanted_type)
> >   || (INTEGRAL_TYPE_P (cur_type)
> >   && INTEGRAL_TYPE_P (wanted_type)))
> > diff --git a/gcc/c-family/c.opt b/gcc/c-family/c.opt
> > index 4b8a094b206..d7d952765c6 100644
> > --- a/gcc/c-family/c.opt
> > +++ b/gcc/c-family/c.opt
> > @@ -684,6 +684,12 @@ C ObjC C++ LTO ObjC++ Warning
> > Alias(Wformat-overflow=, 1, 0) IntegerRange(0, 2) Warn about
> > function calls with format strings that write past the end of the
> > destination region.  Same as -Wformat-overflow=1. 
> > +Wformat-int-precision
> > +C ObjC C++ ObjC++ Var(warn_format_int_precision) Warning LangEnabledBy(C ObjC C++ ObjC++,Wformat=,warn_format >= 1, 0)
> > +Warn when passing an argument of an incompatible integer type to a 'd', 'i',
> > +'b', 'B', 'o', 'u', 'x', or 'X' conversion specifier even when it has the same
> > +precision as the expected type.
> > +
> >  Wformat-security
> >  C ObjC C++ ObjC++ Var(warn_format_security) Warning LangEnabledBy(C
> > ObjC C++ ObjC++,Wformat=, warn_format >= 2, 0) Warn about possible
> > security problems with format functions. diff --git
> > a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi index
> > 3bddfbaae6a..94a7ad96c50 100644 --- a/gcc/doc/invoke.texi
> > +++ b/gcc/doc/invoke.texi
> > @@ -351,7 +351,7 @@ Objective-C and Objective-C++ Dialects}.
> >  -Werror  -Werror=*  -Wexpansion-to-defined  -Wfatal-errors @gol
> >  -Wfloat-conversion  -Wfloat-equal  -Wformat  -Wformat=2 @gol
> >  -Wno-form

Re: [PATCH v4] c-format: Add -Wformat-int-precision option [PR80060]

2021-12-20 Thread Daniil Stas via Gcc-patches
On Sat, 27 Nov 2021 22:18:23 +
Daniil Stas  wrote:

> This option is enabled by default when -Wformat option is enabled. A
> user can specify -Wno-format-int-precision to disable emitting
> warnings when passing an argument of an incompatible integer type to
> a 'd', 'i', 'b', 'B', 'o', 'u', 'x', or 'X' conversion specifier when
> it has the same precision as the expected type.
> 
> Signed-off-by: Daniil Stas 
> 
> gcc/c-family/ChangeLog:
> 
>   * c-format.c (check_format_types): Don't emit warnings when
>   passing an argument of an incompatible integer type to
>   a 'd', 'i', 'b', 'B', 'o', 'u', 'x', or 'X' conversion
>   specifier when it has the same precision as the expected type
>   if -Wno-format-int-precision option is specified.
>   * c.opt: Add -Wformat-int-precision option.
> 
> gcc/ChangeLog:
> 
>   * doc/invoke.texi: Add -Wformat-int-precision option
> description.
> 
> gcc/testsuite/ChangeLog:
> 
>   * c-c++-common/Wformat-int-precision-1.c: New test.
>   * c-c++-common/Wformat-int-precision-2.c: New test.
> ---
> Changes for v4:
>   - Added 'b' and 'B' format specifiers to the option descriptions.
> 
> Changes for v3:
>   - Added additional @code{} directives to the documentation where
> needed.
>   - Changed tests to run on "! long_neq_int" target instead of "lp64".
>   - Added a test case to check that gcc still emits warnings for
> arguments with different precision even with
> -Wno-format-int-precision option enabled.
> 
> Changes for v2:
>   - Changed the option name to -Wformat-int-precision.
>   - Changed the option description as was suggested by Martin.
>   - Changed Wformat-int-precision-2.c to use dg-bogus instead of
> previous invalid syntax.
> 
>  gcc/c-family/c-format.c |  2 +-
>  gcc/c-family/c.opt  |  6 ++++++
>  gcc/doc/invoke.texi | 17 ++++++++++++++++-
>  .../c-c++-common/Wformat-int-precision-1.c  |  7 +++++++
>  .../c-c++-common/Wformat-int-precision-2.c  |  8 ++++++++
>  5 files changed, 38 insertions(+), 2 deletions(-)
>  create mode 100644 gcc/testsuite/c-c++-common/Wformat-int-precision-1.c
>  create mode 100644 gcc/testsuite/c-c++-common/Wformat-int-precision-2.c
> 
> diff --git a/gcc/c-family/c-format.c b/gcc/c-family/c-format.c
> index e735e092043..c66787f931f 100644
> --- a/gcc/c-family/c-format.c
> +++ b/gcc/c-family/c-format.c
> @@ -4248,7 +4248,7 @@ check_format_types (const substring_loc &fmt_loc,
>   && (!pedantic || i < 2)
> && char_type_flag)
>   continue;
> -  if (types->scalar_identity_flag
> +  if ((types->scalar_identity_flag || !warn_format_int_precision)
> && (TREE_CODE (cur_type) == TREE_CODE (wanted_type)
> || (INTEGRAL_TYPE_P (cur_type)
> && INTEGRAL_TYPE_P (wanted_type)))
> diff --git a/gcc/c-family/c.opt b/gcc/c-family/c.opt
> index 4b8a094b206..d7d952765c6 100644
> --- a/gcc/c-family/c.opt
> +++ b/gcc/c-family/c.opt
> @@ -684,6 +684,12 @@ C ObjC C++ LTO ObjC++ Warning
> Alias(Wformat-overflow=, 1, 0) IntegerRange(0, 2) Warn about function
> calls with format strings that write past the end of the destination
> region.  Same as -Wformat-overflow=1. 
> +Wformat-int-precision
> +C ObjC C++ ObjC++ Var(warn_format_int_precision) Warning LangEnabledBy(C ObjC C++ ObjC++,Wformat=,warn_format >= 1, 0)
> +Warn when passing an argument of an incompatible integer type to a 'd',
> +'i', 'b', 'B', 'o', 'u', 'x', or 'X' conversion specifier even when
> +it has the same precision as the expected type.
> +
>  Wformat-security
>  C ObjC C++ ObjC++ Var(warn_format_security) Warning LangEnabledBy(C
> ObjC C++ ObjC++,Wformat=, warn_format >= 2, 0) Warn about possible
> security problems with format functions. diff --git
> a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi index
> 3bddfbaae6a..94a7ad96c50 100644 --- a/gcc/doc/invoke.texi
> +++ b/gcc/doc/invoke.texi
> @@ -351,7 +351,7 @@ Objective-C and Objective-C++ Dialects}.
>  -Werror  -Werror=*  -Wexpansion-to-defined  -Wfatal-errors @gol
>  -Wfloat-conversion  -Wfloat-equal  -Wformat  -Wformat=2 @gol
>  -Wno-format-contains-nul  -Wno-format-extra-args  @gol
> --Wformat-nonliteral  -Wformat-overflow=@var{n} @gol
> +-Wformat-nonliteral  -Wformat-overflow=@var{n}
> -Wformat-int-precision @gol -Wformat-security  -Wformat-signedness
> -Wformat-truncation=@var{n} @gol -Wformat-y2k  -Wframe-address @gol
>  -Wframe-larger-than=@var{byte-size}  -Wno-free-nonheap-object @gol
> @@ -6122,6 +6122,21 @@ If @option{-Wformat} is specified, also warn
> if the format string is not a string literal a

Re: [PATCH v3] c-format: Add -Wformat-int-precision option [PR80060]

2021-11-27 Thread Daniil Stas via Gcc-patches
On Tue, 23 Nov 2021 22:48:24 +
Joseph Myers  wrote:

> On Tue, 23 Nov 2021, Daniil Stas via Gcc-patches wrote:
> 
> > On Mon, 22 Nov 2021 20:35:03 +
> > Joseph Myers  wrote:
> >   
> > > On Sun, 21 Nov 2021, Daniil Stas via Gcc-patches wrote:
> > >   
>  [...]  
> > > 
> > > I'd expect this to apply to 'b' and 'B' as well (affects commit
> > > message, ChangeLog entry, option help string, documentation).
> > >   
> > 
> > Hi Joseph,
> > 
> > I can't find any description of these specifiers anywhere. And
> > looks  
> 
> They're new specifiers in C23.  See the most recent working draft 
> <http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2731.pdf>.
> 

Ah, thank you.
I've sent an updated patch.
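To make the option's target case concrete: it is about arguments whose C type differs from what the conversion specifier expects while their precision is identical, e.g. `long` vs. `long long` on LP64 targets. A quick, purely illustrative way to inspect those widths on a given platform (Python's ctypes, not part of the patch):

```python
import ctypes

# Bit widths of the underlying C integer types on this platform.
bits = {name: 8 * ctypes.sizeof(t) for name, t in [
    ("int", ctypes.c_int),
    ("long", ctypes.c_long),
    ("long long", ctypes.c_longlong),
]}
print(bits)

# On LP64 targets 'long' and 'long long' are both 64-bit: passing an
# int64_t (a 'long' there) to a "%lld" specifier is then a type mismatch
# with identical precision -- the case -Wno-format-int-precision would
# silence, while genuinely different precisions would still warn.
same_precision = bits["long"] == bits["long long"]
```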


[PATCH v4] c-format: Add -Wformat-int-precision option [PR80060]

2021-11-27 Thread Daniil Stas via Gcc-patches
This option is enabled by default when -Wformat option is enabled. A
user can specify -Wno-format-int-precision to disable emitting
warnings when passing an argument of an incompatible integer type to
a 'd', 'i', 'b', 'B', 'o', 'u', 'x', or 'X' conversion specifier when
it has the same precision as the expected type.

Signed-off-by: Daniil Stas 

gcc/c-family/ChangeLog:

* c-format.c (check_format_types): Don't emit warnings when
passing an argument of an incompatible integer type to
a 'd', 'i', 'b', 'B', 'o', 'u', 'x', or 'X' conversion
specifier when it has the same precision as the expected type
if -Wno-format-int-precision option is specified.
* c.opt: Add -Wformat-int-precision option.

gcc/ChangeLog:

* doc/invoke.texi: Add -Wformat-int-precision option description.

gcc/testsuite/ChangeLog:

* c-c++-common/Wformat-int-precision-1.c: New test.
* c-c++-common/Wformat-int-precision-2.c: New test.
---
Changes for v4:
  - Added 'b' and 'B' format specifiers to the option descriptions.

Changes for v3:
  - Added additional @code{} directives to the documentation where needed.
  - Changed tests to run on "! long_neq_int" target instead of "lp64".
  - Added a test case to check that gcc still emits warnings for arguments
  with different precision even with -Wno-format-int-precision option enabled.

Changes for v2:
  - Changed the option name to -Wformat-int-precision.
  - Changed the option description as was suggested by Martin.
  - Changed Wformat-int-precision-2.c to use dg-bogus instead of previous
  invalid syntax.

 gcc/c-family/c-format.c |  2 +-
 gcc/c-family/c.opt  |  6 ++
 gcc/doc/invoke.texi | 17 -
 .../c-c++-common/Wformat-int-precision-1.c  |  7 +++
 .../c-c++-common/Wformat-int-precision-2.c  |  8 
 5 files changed, 38 insertions(+), 2 deletions(-)
 create mode 100644 gcc/testsuite/c-c++-common/Wformat-int-precision-1.c
 create mode 100644 gcc/testsuite/c-c++-common/Wformat-int-precision-2.c

diff --git a/gcc/c-family/c-format.c b/gcc/c-family/c-format.c
index e735e092043..c66787f931f 100644
--- a/gcc/c-family/c-format.c
+++ b/gcc/c-family/c-format.c
@@ -4248,7 +4248,7 @@ check_format_types (const substring_loc &fmt_loc,
  && (!pedantic || i < 2)
  && char_type_flag)
continue;
-  if (types->scalar_identity_flag
+  if ((types->scalar_identity_flag || !warn_format_int_precision)
  && (TREE_CODE (cur_type) == TREE_CODE (wanted_type)
  || (INTEGRAL_TYPE_P (cur_type)
  && INTEGRAL_TYPE_P (wanted_type)))
diff --git a/gcc/c-family/c.opt b/gcc/c-family/c.opt
index 4b8a094b206..d7d952765c6 100644
--- a/gcc/c-family/c.opt
+++ b/gcc/c-family/c.opt
@@ -684,6 +684,12 @@ C ObjC C++ LTO ObjC++ Warning Alias(Wformat-overflow=, 1, 
0) IntegerRange(0, 2)
 Warn about function calls with format strings that write past the end
 of the destination region.  Same as -Wformat-overflow=1.
 
+Wformat-int-precision
+C ObjC C++ ObjC++ Var(warn_format_int_precision) Warning LangEnabledBy(C ObjC 
C++ ObjC++,Wformat=,warn_format >= 1, 0)
+Warn when passing an argument of an incompatible integer type to a 'd', 'i',
+'b', 'B', 'o', 'u', 'x', or 'X' conversion specifier even when it has the same
+precision as the expected type.
+
 Wformat-security
 C ObjC C++ ObjC++ Var(warn_format_security) Warning LangEnabledBy(C ObjC C++ 
ObjC++,Wformat=, warn_format >= 2, 0)
 Warn about possible security problems with format functions.
diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi
index 3bddfbaae6a..94a7ad96c50 100644
--- a/gcc/doc/invoke.texi
+++ b/gcc/doc/invoke.texi
@@ -351,7 +351,7 @@ Objective-C and Objective-C++ Dialects}.
 -Werror  -Werror=*  -Wexpansion-to-defined  -Wfatal-errors @gol
 -Wfloat-conversion  -Wfloat-equal  -Wformat  -Wformat=2 @gol
 -Wno-format-contains-nul  -Wno-format-extra-args  @gol
--Wformat-nonliteral  -Wformat-overflow=@var{n} @gol
+-Wformat-nonliteral  -Wformat-overflow=@var{n} -Wformat-int-precision @gol
 -Wformat-security  -Wformat-signedness  -Wformat-truncation=@var{n} @gol
 -Wformat-y2k  -Wframe-address @gol
 -Wframe-larger-than=@var{byte-size}  -Wno-free-nonheap-object @gol
@@ -6122,6 +6122,21 @@ If @option{-Wformat} is specified, also warn if the 
format string is not a
 string literal and so cannot be checked, unless the format function
 takes its format arguments as a @code{va_list}.
 
+@item -Wformat-int-precision
+@opindex Wformat-int-precision
+@opindex Wno-format-int-precision
+Warn when passing an argument of an incompatible integer type to
+a @samp{d}, @samp{i}, @samp{b}, @samp{B}, @samp{o}, @samp{u}, @samp{x},
+or @samp{X} conversion specifier even when it has the same precision as
+the expected type.  For example, on targets where @code{int64_t} is a typedef
+for 

Re: [PATCH v3] c-format: Add -Wformat-int-precision option [PR80060]

2021-11-23 Thread Daniil Stas via Gcc-patches
On Mon, 22 Nov 2021 20:35:03 +
Joseph Myers  wrote:

> On Sun, 21 Nov 2021, Daniil Stas via Gcc-patches wrote:
> 
> > This option is enabled by default when -Wformat option is enabled. A
> > user can specify -Wno-format-int-precision to disable emitting
> > warnings when passing an argument of an incompatible integer type to
> > a 'd', 'i', 'o', 'u', 'x', or 'X' conversion specifier when it has
> > the same precision as the expected type.  
> 
> I'd expect this to apply to 'b' and 'B' as well (affects commit
> message, ChangeLog entry, option help string, documentation).
> 

Hi Joseph,

I can't find any description of these specifiers anywhere. And it looks
like gcc doesn't recognize them when I try to compile a sample program
that uses them (I just get %B printed when I run the program).
Do these specifiers actually exist? Can you point me to the
documentation?

Thanks


Re: [PATCH v2] c-format: Add -Wformat-int-precision option [PR80060]

2021-11-21 Thread Daniil Stas via Gcc-patches
On Thu, 4 Nov 2021 18:25:14 -0600
Martin Sebor  wrote:

> On 10/31/21 8:13 AM, Daniil Stas wrote:
> > On Sun, 10 Oct 2021 23:10:20 +
> > Daniil Stas  wrote:
> >   
> >> This option is enabled by default when -Wformat option is enabled.
> >> A user can specify -Wno-format-int-precision to disable emitting
> >> warnings when passing an argument of an incompatible integer type
> >> to a 'd', 'i', 'o', 'u', 'x', or 'X' conversion specifier when it
> >> has the same precision as the expected type.
> >>
> >> Signed-off-by: Daniil Stas 
> >>
> >> gcc/c-family/ChangeLog:
> >>
> >>* c-format.c (check_format_types): Don't emit warnings when
> >>passing an argument of an incompatible integer type to
> >>a 'd', 'i', 'o', 'u', 'x', or 'X' conversion specifier when
> >> it has the same precision as the expected type if
> >>-Wno-format-int-precision option is specified.
> >>* c.opt: Add -Wformat-int-precision option.
> >>
> >> gcc/ChangeLog:
> >>
> >>* doc/invoke.texi: Add -Wformat-int-precision option
> >> description.
> >>
> >> gcc/testsuite/ChangeLog:
> >>
> >>* c-c++-common/Wformat-int-precision-1.c: New test.
> >>* c-c++-common/Wformat-int-precision-2.c: New test.
> >> ---
> >> This is an update of patch "c-format: Add -Wformat-same-precision
> >> option [PR80060]". The changes comparing to the first patch
> >> version:
> >>
> >> - changed the option name to -Wformat-int-precision
> >> - changed the option description as was suggested by Martin
> >> - changed Wformat-int-precision-2.c to use dg-bogus instead of
> >> previous invalid syntax
> >>
> >> I also tried to combine the tests into one file with #pragma GCC
> >> diagnostic, but looks like it's not possible. I want to test that
> >> when passing just -Wformat option everything works as before my
> >> patch by default. And then in another test case to check that
> >> passing -Wno-format-int-precision disables the warning. But looks
> >> like in GCC you can't toggle the warnings such as
> >> -Wno-format-int-precision individually but only can disable the
> >> general -Wformat option that will disable all the formatting
> >> warnings together, which is not the proper test.  
> > 
> > Hi,
> > Can anyone review this patch?
> > Thank you  
> 
> I can't approve the change but it looks pretty good to me.
> 
> The documentation should wrap code symbols like int64_t, long,
> or printf in @code{} directives.
> 
> I don't think the first test needs to be restricted to just
> lp64, although I'd expect it to already be covered by the test
> suite.  The lp64 selector only tells us that int is 32 bits
> and long (and pointer) are 64, but nothing about long long so
> I suspect the test might fail on other targets.  There's llp64
> that's true for 4 byte ints and longs (but few targets match),
> and long_neq_int that's true when long is not the same size as
> int. So I think the inverse of the latter might be best, with
> int and long as arguments.  testsuite/lib/target-supports.exp
> defines these and others.
> 
> It might also be a good idea to add another case to the second
> test to exercise arguments with different precision to make
> sure -Wformat still triggers for those even  with
> -Wno-format-int-precision.
> 
> The -Wformat warnings are Joseph's domain (CC'd) so either he
> or some other C or global reviewer needs to sign off on changes
> in this area.  (Please ping the patch weekly until you get
> a response.)
> 
> Thanks
> Martin

Hi, Martin
Thanks for your response. I've sent an updated patch.

Best regards,
Daniil


[PATCH v3] c-format: Add -Wformat-int-precision option [PR80060]

2021-11-21 Thread Daniil Stas via Gcc-patches
This option is enabled by default when -Wformat option is enabled. A
user can specify -Wno-format-int-precision to disable emitting
warnings when passing an argument of an incompatible integer type to
a 'd', 'i', 'o', 'u', 'x', or 'X' conversion specifier when it has
the same precision as the expected type.

Signed-off-by: Daniil Stas 

gcc/c-family/ChangeLog:

* c-format.c (check_format_types): Don't emit warnings when
passing an argument of an incompatible integer type to
a 'd', 'i', 'o', 'u', 'x', or 'X' conversion specifier when it has
the same precision as the expected type if
-Wno-format-int-precision option is specified.
* c.opt: Add -Wformat-int-precision option.

gcc/ChangeLog:

* doc/invoke.texi: Add -Wformat-int-precision option description.

gcc/testsuite/ChangeLog:

* c-c++-common/Wformat-int-precision-1.c: New test.
* c-c++-common/Wformat-int-precision-2.c: New test.
---
Changes for v3:
  - Added additional @code{} directives to the documentation where needed.
  - Changed tests to run on "! long_neq_int" target instead of "lp64".
  - Added a test case to check that gcc still emits warnings for arguments
  with different precision even with -Wno-format-int-precision option enabled.

Changes for v2:
  - Changed the option name to -Wformat-int-precision.
  - Changed the option description as was suggested by Martin.
  - Changed Wformat-int-precision-2.c to use dg-bogus instead of previous
  invalid syntax.

 gcc/c-family/c-format.c |  2 +-
 gcc/c-family/c.opt  |  6 ++
 gcc/doc/invoke.texi | 17 -
 .../c-c++-common/Wformat-int-precision-1.c  |  7 +++
 .../c-c++-common/Wformat-int-precision-2.c  |  8 
 5 files changed, 38 insertions(+), 2 deletions(-)
 create mode 100644 gcc/testsuite/c-c++-common/Wformat-int-precision-1.c
 create mode 100644 gcc/testsuite/c-c++-common/Wformat-int-precision-2.c

diff --git a/gcc/c-family/c-format.c b/gcc/c-family/c-format.c
index e735e092043..c66787f931f 100644
--- a/gcc/c-family/c-format.c
+++ b/gcc/c-family/c-format.c
@@ -4248,7 +4248,7 @@ check_format_types (const substring_loc _loc,
  && (!pedantic || i < 2)
  && char_type_flag)
continue;
-  if (types->scalar_identity_flag
+  if ((types->scalar_identity_flag || !warn_format_int_precision)
  && (TREE_CODE (cur_type) == TREE_CODE (wanted_type)
  || (INTEGRAL_TYPE_P (cur_type)
  && INTEGRAL_TYPE_P (wanted_type)))
diff --git a/gcc/c-family/c.opt b/gcc/c-family/c.opt
index 3976fc368db..0621585a4f9 100644
--- a/gcc/c-family/c.opt
+++ b/gcc/c-family/c.opt
@@ -684,6 +684,12 @@ C ObjC C++ LTO ObjC++ Warning Alias(Wformat-overflow=, 1, 
0) IntegerRange(0, 2)
 Warn about function calls with format strings that write past the end
 of the destination region.  Same as -Wformat-overflow=1.
 
+Wformat-int-precision
+C ObjC C++ ObjC++ Var(warn_format_int_precision) Warning LangEnabledBy(C ObjC 
C++ ObjC++,Wformat=,warn_format >= 1, 0)
+Warn when passing an argument of an incompatible integer type to a 'd', 'i',
+'o', 'u', 'x', or 'X' conversion specifier even when it has the same precision
+as the expected type.
+
 Wformat-security
 C ObjC C++ ObjC++ Var(warn_format_security) Warning LangEnabledBy(C ObjC C++ 
ObjC++,Wformat=, warn_format >= 2, 0)
 Warn about possible security problems with format functions.
diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi
index 4b1b58318f0..da69d804598 100644
--- a/gcc/doc/invoke.texi
+++ b/gcc/doc/invoke.texi
@@ -351,7 +351,7 @@ Objective-C and Objective-C++ Dialects}.
 -Werror  -Werror=*  -Wexpansion-to-defined  -Wfatal-errors @gol
 -Wfloat-conversion  -Wfloat-equal  -Wformat  -Wformat=2 @gol
 -Wno-format-contains-nul  -Wno-format-extra-args  @gol
--Wformat-nonliteral  -Wformat-overflow=@var{n} @gol
+-Wformat-nonliteral  -Wformat-overflow=@var{n} -Wformat-int-precision @gol
 -Wformat-security  -Wformat-signedness  -Wformat-truncation=@var{n} @gol
 -Wformat-y2k  -Wframe-address @gol
 -Wframe-larger-than=@var{byte-size}  -Wno-free-nonheap-object @gol
@@ -6113,6 +6113,21 @@ If @option{-Wformat} is specified, also warn if the 
format string is not a
 string literal and so cannot be checked, unless the format function
 takes its format arguments as a @code{va_list}.
 
+@item -Wformat-int-precision
+@opindex Wformat-int-precision
+@opindex Wno-format-int-precision
+Warn when passing an argument of an incompatible integer type to
+a @samp{d}, @samp{i}, @samp{o}, @samp{u}, @samp{x}, or @samp{X} conversion
+specifier even when it has the same precision as the expected type.
+For example, on targets where @code{int64_t} is a typedef for @code{long},
+the warning is issued for the @code{printf} call below even when both
+@code{long} and @code{long long} have the same size an

Re: [PATCH v2] c-format: Add -Wformat-int-precision option [PR80060]

2021-10-31 Thread Daniil Stas via Gcc-patches
On Sun, 10 Oct 2021 23:10:20 +
Daniil Stas  wrote:

> This option is enabled by default when -Wformat option is enabled. A
> user can specify -Wno-format-int-precision to disable emitting
> warnings when passing an argument of an incompatible integer type to
> a 'd', 'i', 'o', 'u', 'x', or 'X' conversion specifier when it has
> the same precision as the expected type.
> 
> Signed-off-by: Daniil Stas 
> 
> gcc/c-family/ChangeLog:
> 
>   * c-format.c (check_format_types): Don't emit warnings when
>   passing an argument of an incompatible integer type to
>   a 'd', 'i', 'o', 'u', 'x', or 'X' conversion specifier when
> it has the same precision as the expected type if
>   -Wno-format-int-precision option is specified.
>   * c.opt: Add -Wformat-int-precision option.
> 
> gcc/ChangeLog:
> 
>   * doc/invoke.texi: Add -Wformat-int-precision option
> description.
> 
> gcc/testsuite/ChangeLog:
> 
>   * c-c++-common/Wformat-int-precision-1.c: New test.
>   * c-c++-common/Wformat-int-precision-2.c: New test.
> ---
> This is an update of patch "c-format: Add -Wformat-same-precision
> option [PR80060]". The changes comparing to the first patch version:
> 
> - changed the option name to -Wformat-int-precision
> - changed the option description as was suggested by Martin
> - changed Wformat-int-precision-2.c to use dg-bogus instead of
> previous invalid syntax
> 
> I also tried to combine the tests into one file with #pragma GCC
> diagnostic, but looks like it's not possible. I want to test that
> when passing just -Wformat option everything works as before my patch
> by default. And then in another test case to check that passing
> -Wno-format-int-precision disables the warning. But looks like in GCC
> you can't toggle the warnings such as -Wno-format-int-precision
> individually but only can disable the general -Wformat option that
> will disable all the formatting warnings together, which is not the
> proper test.

Hi,
Can anyone review this patch?
Thank you

--
Daniil


[PATCH v2] c-format: Add -Wformat-int-precision option [PR80060]

2021-10-10 Thread Daniil Stas via Gcc-patches
This option is enabled by default when -Wformat option is enabled. A
user can specify -Wno-format-int-precision to disable emitting
warnings when passing an argument of an incompatible integer type to
a 'd', 'i', 'o', 'u', 'x', or 'X' conversion specifier when it has
the same precision as the expected type.

Signed-off-by: Daniil Stas 

gcc/c-family/ChangeLog:

* c-format.c (check_format_types): Don't emit warnings when
passing an argument of an incompatible integer type to
a 'd', 'i', 'o', 'u', 'x', or 'X' conversion specifier when it has
the same precision as the expected type if
-Wno-format-int-precision option is specified.
* c.opt: Add -Wformat-int-precision option.

gcc/ChangeLog:

* doc/invoke.texi: Add -Wformat-int-precision option description.

gcc/testsuite/ChangeLog:

* c-c++-common/Wformat-int-precision-1.c: New test.
* c-c++-common/Wformat-int-precision-2.c: New test.
---
This is an update of patch "c-format: Add -Wformat-same-precision option 
[PR80060]".
The changes comparing to the first patch version:

- changed the option name to -Wformat-int-precision
- changed the option description as was suggested by Martin
- changed Wformat-int-precision-2.c to use dg-bogus instead of previous invalid
syntax

I also tried to combine the tests into one file with #pragma GCC diagnostic,
but looks like it's not possible. I want to test that when passing just -Wformat
option everything works as before my patch by default. And then in another test
case to check that passing -Wno-format-int-precision disables the warning. But
looks like in GCC you can't toggle the warnings such as
-Wno-format-int-precision individually but only can disable the general
-Wformat option that will disable all the formatting warnings together, which
is not the proper test.

 gcc/c-family/c-format.c |  2 +-
 gcc/c-family/c.opt  |  6 ++
 gcc/doc/invoke.texi | 17 -
 .../c-c++-common/Wformat-int-precision-1.c  |  7 +++
 .../c-c++-common/Wformat-int-precision-2.c  |  7 +++
 5 files changed, 37 insertions(+), 2 deletions(-)
 create mode 100644 gcc/testsuite/c-c++-common/Wformat-int-precision-1.c
 create mode 100644 gcc/testsuite/c-c++-common/Wformat-int-precision-2.c

diff --git a/gcc/c-family/c-format.c b/gcc/c-family/c-format.c
index ca66c81f716..dd4436929f8 100644
--- a/gcc/c-family/c-format.c
+++ b/gcc/c-family/c-format.c
@@ -4243,7 +4243,7 @@ check_format_types (const substring_loc _loc,
  && (!pedantic || i < 2)
  && char_type_flag)
continue;
-  if (types->scalar_identity_flag
+  if ((types->scalar_identity_flag || !warn_format_int_precision)
  && (TREE_CODE (cur_type) == TREE_CODE (wanted_type)
  || (INTEGRAL_TYPE_P (cur_type)
  && INTEGRAL_TYPE_P (wanted_type)))
diff --git a/gcc/c-family/c.opt b/gcc/c-family/c.opt
index 06457ac739e..f5b4af3f3f6 100644
--- a/gcc/c-family/c.opt
+++ b/gcc/c-family/c.opt
@@ -660,6 +660,12 @@ C ObjC C++ LTO ObjC++ Warning Alias(Wformat-overflow=, 1, 
0) IntegerRange(0, 2)
 Warn about function calls with format strings that write past the end
 of the destination region.  Same as -Wformat-overflow=1.
 
+Wformat-int-precision
+C ObjC C++ ObjC++ Var(warn_format_int_precision) Warning LangEnabledBy(C ObjC 
C++ ObjC++,Wformat=,warn_format >= 1, 0)
+Warn when passing an argument of an incompatible integer type to a 'd', 'i',
+'o', 'u', 'x', or 'X' conversion specifier even when it has the same precision
+as the expected type.
+
 Wformat-security
 C ObjC C++ ObjC++ Var(warn_format_security) Warning LangEnabledBy(C ObjC C++ 
ObjC++,Wformat=, warn_format >= 2, 0)
 Warn about possible security problems with format functions.
diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi
index 8b3ebcfbc4f..05dec6ba832 100644
--- a/gcc/doc/invoke.texi
+++ b/gcc/doc/invoke.texi
@@ -348,7 +348,7 @@ Objective-C and Objective-C++ Dialects}.
 -Werror  -Werror=*  -Wexpansion-to-defined  -Wfatal-errors @gol
 -Wfloat-conversion  -Wfloat-equal  -Wformat  -Wformat=2 @gol
 -Wno-format-contains-nul  -Wno-format-extra-args  @gol
--Wformat-nonliteral  -Wformat-overflow=@var{n} @gol
+-Wformat-nonliteral  -Wformat-overflow=@var{n} -Wformat-int-precision @gol
 -Wformat-security  -Wformat-signedness  -Wformat-truncation=@var{n} @gol
 -Wformat-y2k  -Wframe-address @gol
 -Wframe-larger-than=@var{byte-size}  -Wno-free-nonheap-object @gol
@@ -6056,6 +6056,21 @@ If @option{-Wformat} is specified, also warn if the 
format string is not a
 string literal and so cannot be checked, unless the format function
 takes its format arguments as a @code{va_list}.
 
+@item -Wformat-int-precision
+@opindex Wformat-int-precision
+@opindex Wno-format-int-precision
+Warn when passing an argument of an incompatible integer type to
+a @sa

Re: [PATCH] c-format: Add -Wformat-same-precision option [PR80060]

2021-10-01 Thread Daniil Stas via Gcc-patches
Hi, Martin

On Thu, 30 Sep 2021 09:02:28 -0600
Martin Sebor  wrote:

> On 9/26/21 3:52 PM, Daniil Stas via Gcc-patches wrote:
> > This option is enabled by default when -Wformat option is enabled. A
> > user can specify -Wno-format-same-precision to disable emitting
> > warnings about an argument passed to printf-like function having a
> > different type from the one specified in the format string if the
> > types precisions are the same.  
> 
> Having an option to control this -Wformat aspect seems useful so
> just a few comments mostly on the wording/naming choices.
> 
> Coming up with good names is tricky but I wonder if we can find
> one that's clearer than "-Wformat-same-precision".  Precision can
> mean a few different things in this context:  in the representation
> of integers it refers to the number of value bits.  In that of
> floating types, it refers to the number of significand bits.  And
> in printf directives, it refers to what comes after the optional
> period and what controls the minimum number of digits to format
> (or maximum number of characters in a string).  So "same precision"
> seems rather vague (and the proposed manual entry doesn't make it
> clear).
> 
> IIUC, the option is specifically for directives that take integer
> arguments and controls whether using an argument of an incompatible
> integer type to a conversion specifier like i or x is diagnosed when
> the argument has the same precision as the expected type.
> 
> With that in mind, would mentioning the word integer (or just int
> for short) be an improvement?  E.g., -Wformat-int-precision?
> 

Yes, I like -Wformat-int-precision name too.

> Some more comments on the documentation text are below.
> 
> > 
> > Signed-off-by: Daniil Stas 
> > 
> > gcc/c-family/ChangeLog:
> > 
> > * c-format.c (check_format_types): Don't emit warnings about
> > type differences with the format string if
> > -Wno-format-same-precision is specified and the types have
> > the same precision.
> > * c.opt: Add -Wformat-same-precision option.
> > 
> > gcc/ChangeLog:
> > 
> > * doc/invoke.texi: Add -Wformat-same-precision option
> > description.
> > 
> > gcc/testsuite/ChangeLog:
> > 
> > * c-c++-common/Wformat-same-precision-1.c: New test.
> > * c-c++-common/Wformat-same-precision-2.c: New test.
> > ---
> >   gcc/c-family/c-format.c   | 2 +-
> >   gcc/c-family/c.opt| 5 +
> >   gcc/doc/invoke.texi   | 8 +++-
> >   gcc/testsuite/c-c++-common/Wformat-same-precision-1.c | 7 +++
> >   gcc/testsuite/c-c++-common/Wformat-same-precision-2.c | 7 +++
> >   5 files changed, 27 insertions(+), 2 deletions(-)
> >   create mode 100644
> > gcc/testsuite/c-c++-common/Wformat-same-precision-1.c create mode
> > 100644 gcc/testsuite/c-c++-common/Wformat-same-precision-2.c
> > 
> > diff --git a/gcc/c-family/c-format.c b/gcc/c-family/c-format.c
> > index b4cb765a9d3..07cdcefbef8 100644
> > --- a/gcc/c-family/c-format.c
> > +++ b/gcc/c-family/c-format.c
> > @@ -4243,7 +4243,7 @@ check_format_types (const substring_loc
> > _loc, && (!pedantic || i < 2)
> >   && char_type_flag)
> > continue;
> > -  if (types->scalar_identity_flag
> > +  if ((types->scalar_identity_flag ||
> > !warn_format_same_precision) && (TREE_CODE (cur_type) == TREE_CODE
> > (wanted_type) || (INTEGRAL_TYPE_P (cur_type)
> >   && INTEGRAL_TYPE_P (wanted_type)))
> > diff --git a/gcc/c-family/c.opt b/gcc/c-family/c.opt
> > index 9c151d19870..e7af7365c91 100644
> > --- a/gcc/c-family/c.opt
> > +++ b/gcc/c-family/c.opt
> > @@ -656,6 +656,11 @@ C ObjC C++ LTO ObjC++ Warning
> > Alias(Wformat-overflow=, 1, 0) IntegerRange(0, 2) Warn about
> > function calls with format strings that write past the end of the
> > destination region.  Same as -Wformat-overflow=1. 
> > +Wformat-same-precision
> > +C ObjC C++ ObjC++ Var(warn_format_same_precision) Warning
> > LangEnabledBy(C ObjC C++ ObjC++,Wformat=,warn_format >= 1, 0) +Warn
> > about type differences with the format string even if the types
> > +precision is the same.  
> 
> The grammar doesn't seem quite right here (I recommend to adjust
> the text as well along similar lines as the manual, except more
> brief as is customary here).
> 
> 
> > +
> >   Wformat-security
> >   C ObjC C++ ObjC++ Var(warn_format_security) Warning
> > LangEnabledBy(C 

[PATCH] c-format: Add -Wformat-same-precision option [PR80060]

2021-09-26 Thread Daniil Stas via Gcc-patches
This option is enabled by default when -Wformat option is enabled. A
user can specify -Wno-format-same-precision to disable emitting
warnings about an argument passed to printf-like function having a
different type from the one specified in the format string if the
types precisions are the same.

Signed-off-by: Daniil Stas 

gcc/c-family/ChangeLog:

* c-format.c (check_format_types): Don't emit warnings about
type differences with the format string if
-Wno-format-same-precision is specified and the types have
the same precision.
* c.opt: Add -Wformat-same-precision option.

gcc/ChangeLog:

* doc/invoke.texi: Add -Wformat-same-precision option description.

gcc/testsuite/ChangeLog:

* c-c++-common/Wformat-same-precision-1.c: New test.
* c-c++-common/Wformat-same-precision-2.c: New test.
---
 gcc/c-family/c-format.c   | 2 +-
 gcc/c-family/c.opt| 5 +
 gcc/doc/invoke.texi   | 8 +++-
 gcc/testsuite/c-c++-common/Wformat-same-precision-1.c | 7 +++
 gcc/testsuite/c-c++-common/Wformat-same-precision-2.c | 7 +++
 5 files changed, 27 insertions(+), 2 deletions(-)
 create mode 100644 gcc/testsuite/c-c++-common/Wformat-same-precision-1.c
 create mode 100644 gcc/testsuite/c-c++-common/Wformat-same-precision-2.c

diff --git a/gcc/c-family/c-format.c b/gcc/c-family/c-format.c
index b4cb765a9d3..07cdcefbef8 100644
--- a/gcc/c-family/c-format.c
+++ b/gcc/c-family/c-format.c
@@ -4243,7 +4243,7 @@ check_format_types (const substring_loc _loc,
  && (!pedantic || i < 2)
  && char_type_flag)
continue;
-  if (types->scalar_identity_flag
+  if ((types->scalar_identity_flag || !warn_format_same_precision)
  && (TREE_CODE (cur_type) == TREE_CODE (wanted_type)
  || (INTEGRAL_TYPE_P (cur_type)
  && INTEGRAL_TYPE_P (wanted_type)))
diff --git a/gcc/c-family/c.opt b/gcc/c-family/c.opt
index 9c151d19870..e7af7365c91 100644
--- a/gcc/c-family/c.opt
+++ b/gcc/c-family/c.opt
@@ -656,6 +656,11 @@ C ObjC C++ LTO ObjC++ Warning Alias(Wformat-overflow=, 1, 
0) IntegerRange(0, 2)
 Warn about function calls with format strings that write past the end
 of the destination region.  Same as -Wformat-overflow=1.
 
+Wformat-same-precision
+C ObjC C++ ObjC++ Var(warn_format_same_precision) Warning LangEnabledBy(C ObjC 
C++ ObjC++,Wformat=,warn_format >= 1, 0)
+Warn about type differences with the format string even if the types
+precision is the same.
+
 Wformat-security
 C ObjC C++ ObjC++ Var(warn_format_security) Warning LangEnabledBy(C ObjC C++ 
ObjC++,Wformat=, warn_format >= 2, 0)
 Warn about possible security problems with format functions.
diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi
index ba98eab68a5..8833f257d75 100644
--- a/gcc/doc/invoke.texi
+++ b/gcc/doc/invoke.texi
@@ -347,7 +347,7 @@ Objective-C and Objective-C++ Dialects}.
 -Werror  -Werror=*  -Wexpansion-to-defined  -Wfatal-errors @gol
 -Wfloat-conversion  -Wfloat-equal  -Wformat  -Wformat=2 @gol
 -Wno-format-contains-nul  -Wno-format-extra-args  @gol
--Wformat-nonliteral  -Wformat-overflow=@var{n} @gol
+-Wformat-nonliteral  -Wformat-overflow=@var{n} -Wformat-same-precision @gol
 -Wformat-security  -Wformat-signedness  -Wformat-truncation=@var{n} @gol
 -Wformat-y2k  -Wframe-address @gol
 -Wframe-larger-than=@var{byte-size}  -Wno-free-nonheap-object @gol
@@ -6054,6 +6054,12 @@ If @option{-Wformat} is specified, also warn if the 
format string is not a
 string literal and so cannot be checked, unless the format function
 takes its format arguments as a @code{va_list}.
 
+@item -Wformat-same-precision
+@opindex Wformat-same-precision
+@opindex Wno-format-same-precision
+If @option{-Wformat} is specified, warn about type differences with the format
+string even if the types precision is the same.
+
 @item -Wformat-security
 @opindex Wformat-security
 @opindex Wno-format-security
diff --git a/gcc/testsuite/c-c++-common/Wformat-same-precision-1.c 
b/gcc/testsuite/c-c++-common/Wformat-same-precision-1.c
new file mode 100644
index 000..fbc11e4200a
--- /dev/null
+++ b/gcc/testsuite/c-c++-common/Wformat-same-precision-1.c
@@ -0,0 +1,7 @@
+/* { dg-do compile { target lp64 } } */
+/* { dg-options "-Wformat" } */
+
+void test ()
+{
+  __builtin_printf ("%lu\n", (long long) 1); /* { dg-warning "expects argument 
of type" } */
+}
diff --git a/gcc/testsuite/c-c++-common/Wformat-same-precision-2.c 
b/gcc/testsuite/c-c++-common/Wformat-same-precision-2.c
new file mode 100644
index 000..17e643e0441
--- /dev/null
+++ b/gcc/testsuite/c-c++-common/Wformat-same-precision-2.c
@@ -0,0 +1,7 @@
+/* { dg-do compile { target lp64 } } */
+/* { dg-options "-Wformat -Wno-format-same-precision" } */
+
+void test ()
+{
+  __builtin_printf ("%lu\n", (long long) 1); /* { ! dg-warning "expects 
argument of type" } */
+}
-- 
2.33.0



[yakuake] [Bug 435544] Application focus issue

2021-09-23 Thread Stas Egorov
https://bugs.kde.org/show_bug.cgi?id=435544

--- Comment #9 from Stas Egorov  ---
(In reply to Andreas Sturmlechner from comment #8)
> Please test with 21.08.1.

There is no difference in behavior from previous versions.

-- 
You are receiving this mail because:
You are watching all bug changes.

[OAUTH-WG] Purpose of client authentication for "public" client types

2021-08-25 Thread STAS Thibault
Dear,

 

I notice that many API Gateway providers are requiring the authentication of
the client, even for public client types.

 

e.g.

 
https://docs.apigee.com/api-platform/security/oauth/implementing-password-gr
ant-type

 
https://auth0.com/docs/flows/call-your-api-using-resource-owner-password-flo
w

 
https://tyk.io/docs/basic-config-and-security/security/authentication-author
ization/oauth2-0/username-password-grant/

 

Not many providers make client authentication optional:
the client_secret is always expected in either the Authorization Basic
header or within the payload.

 

What is the added value of performing client application authentication in the
context of a "public" client type, such as a vendor application sold to many
customers?

The client_secret would be shipped along with the application, putting its
secrecy at risk.

 

The OAuth standard does not seem to provide much guidance on the use of, and
need for, client authentication in such a context.

 

Would it not be preferable to recommend client identification rather than
client authentication in combination with resource-owner authentication?

The client_id could be provided as part of the selected grant type
parameters.
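
As an illustration of that suggestion (a sketch only; the endpoint, host, and
parameter values are hypothetical), a public client could identify itself in a
resource-owner password grant without any client_secret:

```http
POST /oauth/token HTTP/1.1
Host: auth.example.com
Content-Type: application/x-www-form-urlencoded

grant_type=password&username=alice&password=example-pass&client_id=public-app-123
```

The server would then log or rate-limit per client_id, while relying on the
resource owner's credentials for the actual authentication.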

 

 

 

 

Kind regards,

 

Thibault STAS 

SWIFT | Enterprise Architect - Information Technology

Tel: + 32 2 655 4975


 www.swift.com

This e-mail and any attachments thereto may contain information which is
confidential and/or proprietary and intended for the sole use of the
recipient(s) named above. If you have received this e-mail in error, please
immediately notify the sender and delete the mail.  Thank you for your
co-operation.  SWIFT reserves the right to retain e-mail messages on its
systems and, under circumstances permitted by applicable law, to monitor and
intercept e-mail messages to and from its systems.

 



___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


[yakuake] [Bug 435544] Application focus issue

2021-08-18 Thread Stas Egorov
https://bugs.kde.org/show_bug.cgi?id=435544

--- Comment #5 from Stas Egorov  ---
(In reply to Nikos Chantziaras from comment #4)

> I have the same issue with 21.08.0. I downgraded yakuake to 21.04.3 and it
> works fine again.
> 
> [...]
>
> The commit that introduces the bug is
> 9202df97322ae2f58104e387e914de15b06644ff ("Fix Yakuake icon appearing in
> taskbar through Qt::Tool window flag").

But I wrote this bug report in April, and the commit you mentioned was added in
May.


Bug#991381: darktable: camera not showing in lens correction list and not recognised but is supported

2021-07-22 Thread Stas Zytkiewicz
Package: darktable
Version: 3.6.0-1.1
Severity: normal

Dear Maintainer,


I have a Canon 800D.
When I open a RAW image in darkroom the lens correction always fails to detect 
the camera. It will recognize the lens but not the camera. It tells me to add 
it manually.
But the list with Canon cameras doesn't show a 800D/Rebel T7i.
Looking in /usr/share/darktable/rawspeed/cameras.xml I see that the 800D is 
supported (also the 80D is in the xml but not in the list)
I have double checked that the exif data displays the correct camera type and 
the firmware in the camera is up to date.
I am not sure why multiple cameras that are in the xml file don't show up in the
"manual add" camera list.

This is the out put of the exiftool:
exiftool IMG_1638.CR2 | grep -i "camera.*name"
Camera Model Name : Canon EOS 800D

The fix is to add a dependency on liblensfun-bin, which contains the
lensfun-update-data tool.
Then run lensfun-update-data to update the lensfun database with all the
supported cameras.
See also my github issue in the darktable repo:
https://github.com/darktable-org/darktable/issues/9562
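
The fix described above boils down to two commands (a sketch for
Debian/Ubuntu-style systems; package and tool names as given in this report):

```
# pull in the package that ships the lensfun-update-data tool
sudo apt install liblensfun-bin
# refresh the lensfun camera/lens database so the 800D/80D appear
sudo lensfun-update-data
```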


-- System Information:
Debian Release: bullseye/sid
  APT prefers focal-updates
  APT policy: (500, 'focal-updates'), (500, 'focal-security'), (500, 'focal')
Architecture: amd64 (x86_64)
Foreign Architectures: i386

Kernel: Linux 5.4.0-77-generic (SMP w/8 CPU cores)
Kernel taint flags: TAINT_PROPRIETARY_MODULE, TAINT_OOT_MODULE, 
TAINT_UNSIGNED_MODULE
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8), 
LANGUAGE=en_US:en (charmap=UTF-8)
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled

Versions of packages darktable depends on:
ii  fonts-roboto 2:0~20170802-3
ii  iso-codes4.4-1
ii  libc62.31-0ubuntu9.2
ii  libcairo21.16.0-4ubuntu1
ii  libcolord-gtk1   0.2.0-0ubuntu1
ii  libcolord2   1.4.4-2
ii  libcups2 2.3.1-9ubuntu1.1
ii  libcurl3-gnutls  7.68.0-1ubuntu2.5
ii  libexiv2-27  0.27.2-8ubuntu2.4
ii  libgcc-s110.3.0-1ubuntu1~20.04
ii  libgdk-pixbuf2.0-0   2.40.0+dfsg-3ubuntu0.2
ii  libglib2.0-0 2.64.6-1~ubuntu20.04.3
ii  libgomp1 10.3.0-1ubuntu1~20.04
ii  libgphoto2-6 2.5.25-0ubuntu0.1
ii  libgphoto2-port122.5.25-0ubuntu0.1
ii  libgraphicsmagick-q16-3  1.4+really1.3.35-1
ii  libgtk-3-0   3.24.20-0ubuntu1
ii  libicu66 66.1-2ubuntu2
ii  libilmbase24 2.3.0-6build1
ii  libjpeg8 8c-2ubuntu8
ii  libjs-prototype  1.7.1-3
ii  libjs-scriptaculous  1.9.0-2
ii  libjson-glib-1.0-0   1.4.4-2ubuntu2
ii  liblcms2-2   2.9-4
ii  liblensfun1  0.3.2-5build1
ii  liblua5.3-0  5.3.3-1.1ubuntu2
ii  libopenexr24 2.3.0-6ubuntu0.5
ii  libopenjp2-7 2.3.1-1ubuntu4.20.04.1
ii  libosmgpsmap-1.0-1   1.1.0-6
ii  libpango-1.0-0   1.44.7-2ubuntu4
ii  libpangocairo-1.0-0  1.44.7-2ubuntu4
ii  libpng16-16  1.6.37-2
ii  libpugixml1v51.10-1
ii  librsvg2-2   2.48.9-1ubuntu0.20.04.1
ii  libsecret-1-00.20.4-0ubuntu1
ii  libsoup2.4-1 2.70.0-1
ii  libsqlite3-0 3.31.1-4ubuntu0.2
ii  libstdc++6   10.3.0-1ubuntu1~20.04
ii  libtiff5 4.1.0+git191117-2ubuntu0.20.04.1
ii  libwebp6 0.6.1-2ubuntu0.20.04.1
ii  libx11-6 2:1.6.9-2ubuntu1.2
ii  libxml2  2.9.10+dfsg-5ubuntu0.20.04.1
ii  libxrandr2   2:1.5.2-0ubuntu1
ii  zlib1g   1:1.2.11.dfsg-2ubuntu1.2

darktable recommends no packages.

darktable suggests no packages.

-- no debconf information



Re: [Vm] [Question #697665]: VM install on Ubuntu 20.04 fails - Outdated usage of ‘bbdb-search’

2021-06-27 Thread Stas Burdan
Question #697665 on VM changed:
https://answers.launchpad.net/vm/+question/697665

Status: Answered => Solved

Stas Burdan confirmed that the question is solved:
Thank you Mark,

this worked. I uninstalled the bbdb Ubuntu package and followed your 
instructions, had to install few 
missing dev tools, but it all worked at the end.

Thank you for the help - VM is up and running! Been using it since 2004

-- 
You received this question notification because your team VM development
team is an answer contact for VM.

___
Mailing list: https://launchpad.net/~vm
Post to : vm@lists.launchpad.net
Unsubscribe : https://launchpad.net/~vm
More help   : https://help.launchpad.net/ListHelp


Re: [Vm] [Question #697665]: VM install on Ubuntu 20.04 fails - Outdated usage of ‘bbdb-search’

2021-06-22 Thread Stas Burdan
Question #697665 on VM changed:
https://answers.launchpad.net/vm/+question/697665

Description changed to:
Hello All,

I am installing VM on Ubuntu 20.04,  and it fails with the following
message:

stas@captain:~/vm/vm-8.2.0b$ make

"emacs" -batch -q -no-site-file -no-init-file -l ./vm-build.el -f batch-
byte-compile vm-pcrisis.el

In toplevel form:
vm-pcrisis.el:86:1:Warning: defcustom for ‘vmpc-conditions’ fails to specify
type
vm-pcrisis.el:86:1:Warning: defcustom for ‘vmpc-conditions’ fails to specify
type
vm-pcrisis.el:228:35:Warning: make-face called with 2 arguments, but accepts
only 1
vm-pcrisis.el:235:31:Warning: make-face called with 2 arguments, but accepts
only 1
Outdated usage of ‘bbdb-search’
vm-pcrisis.el:1217:28:Error: Variable name missing after 
make[1]: *** [Makefile:114: vm-pcrisis.elc] Error 1
make[1]: Leaving directory '/home/stas/vm/vm-8.2.0b/lisp'
make: *** [Makefile:37: all] Error 1


I don't know what BBDB is, and I am looking for the simplest possible
workaround so I can get VM up and running. I don't know Lisp, so I can't
rewrite stuff without a steep learning curve.

Thank you in advance!



[Vm] [Question #697665]: VM install on Ubuntu 20.04 fails - Outdated usage of ‘bbdb-search’

2021-06-22 Thread Stas Burdan
New question #697665 on VM:
https://answers.launchpad.net/vm/+question/697665

Hello All,

I am installing VM on Ubuntu 20.04, and it fails with the following message:

stas@captain:~/vm/vm-8.2.0b$ make

"emacs" -batch -q -no-site-file -no-init-file -l ./vm-build.el -f 
batch-byte-compile vm-pcrisis.el

In toplevel form:
vm-pcrisis.el:86:1:Warning: defcustom for ‘vmpc-conditions’ fails to specify
type
vm-pcrisis.el:86:1:Warning: defcustom for ‘vmpc-conditions’ fails to specify
type
vm-pcrisis.el:228:35:Warning: make-face called with 2 arguments, but accepts
only 1
vm-pcrisis.el:235:31:Warning: make-face called with 2 arguments, but accepts
only 1
Outdated usage of ‘bbdb-search’
vm-pcrisis.el:1217:28:Error: Variable name missing after 
make[1]: *** [Makefile:114: vm-pcrisis.elc] Error 1
make[1]: Leaving directory '/home/stas/vm/vm-8.2.0b/lisp'
make: *** [Makefile:37: all] Error 1



I don't know what BBDB is, and I am looking for the simplest possible workaround 
so I can get VM up 
and running. I don't know Lisp, so I can't rewrite things without a steep 
learning curve.

Thank you in advance!

-- 
You received this question notification because your team VM development
team is an answer contact for VM.

___
Mailing list: https://launchpad.net/~vm
Post to : vm@lists.launchpad.net
Unsubscribe : https://launchpad.net/~vm
More help   : https://help.launchpad.net/ListHelp


Re: [OpenSIPS-Users] replace_body() issue

2021-06-18 Thread Stas Kobzar
Hello,
Just do not use ^ and $ in the search pattern. It is probably trying to
match the whole SDP packet, not a single line.
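
In script terms that just means dropping the anchors, e.g. (a sketch based on the snippet quoted below; only the search/replace line changes):

```
if (has_body("application/sdp")) {
    if (search_body("a=inactive")) {
        # no ^/$ anchors: match the literal substring anywhere in the SDP
        replace_body("a=inactive", "a=sendonly");
    }
}
```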

On Fri, Jun 18, 2021 at 5:09 AM Miha via Users 
wrote:

> Hello
>
>  have issue with replace_body as it does not change SDP.
> My code looks like this:
>
> if (has_body("application/sdp")){
> if(search_body("a=inactive")){
>  *replace_body("^a=inactive$", "a=sendonly");*
>
> }
>
>  $var(rtpengine_flags) ="trust-address replace-origin
> replace-session-connection  ICE=remove RTP/AVP rtcp-mux-demux";
>  rtpengine_offer("$var(rtpengine_flags)");
>
>   if(is_audio_on_hold()) {
>
> rtpengine_play_media("callee file=/home/ringback.wav");
>   }
>
>  t_on_reply("1");
> }
>
> What could be wrong that inactive is not replaced by sendonly?
> On a leg I can see "a=inactive" and also on b leg "a=inactive".
>
>
> thank you
> miha
___
Users mailing list
Users@lists.opensips.org
http://lists.opensips.org/cgi-bin/mailman/listinfo/users


[PATCH] net: dwc_eth_qos: Revert some changes of commit 3a97da12ee7b

2021-05-30 Thread Daniil Stas
Revert some changes of commit 3a97da12ee7b ("net: dwc_eth_qos: add dwc
eqos for imx support") that were probably added by mistake.

One of these changes can lead to received data corruption (enabling the
FUP and FEP bits). Another causes invalid rxq_ctrl0 register settings
for some platforms. And another makes some writes at an unknown memory
location.

Fixes: 3a97da12ee7b ("net: dwc_eth_qos: add dwc eqos for imx support")
Signed-off-by: Daniil Stas 
Cc: Ye Li 
Cc: Fugang Duan 
Cc: Peng Fan 
Cc: Ramon Fried 
Cc: Joe Hershberger 
Cc: Patrice Chotard 
Cc: Patrick Delaunay 
---
 drivers/net/dwc_eth_qos.c | 13 +
 1 file changed, 1 insertion(+), 12 deletions(-)

diff --git a/drivers/net/dwc_eth_qos.c b/drivers/net/dwc_eth_qos.c
index 2f088c758f..b012bed517 100644
--- a/drivers/net/dwc_eth_qos.c
+++ b/drivers/net/dwc_eth_qos.c
@@ -172,8 +172,6 @@ struct eqos_mtl_regs {
 #define EQOS_MTL_RXQ0_OPERATION_MODE_RFA_MASK  0x3f
 #define EQOS_MTL_RXQ0_OPERATION_MODE_EHFC  BIT(7)
 #define EQOS_MTL_RXQ0_OPERATION_MODE_RSF   BIT(5)
-#define EQOS_MTL_RXQ0_OPERATION_MODE_FEP   BIT(4)
-#define EQOS_MTL_RXQ0_OPERATION_MODE_FUP   BIT(3)
 
 #define EQOS_MTL_RXQ0_DEBUG_PRXQ_SHIFT 16
 #define EQOS_MTL_RXQ0_DEBUG_PRXQ_MASK  0x7fff
@@ -1222,7 +1220,6 @@ static int eqos_start(struct udevice *dev)
}
 
/* Configure MTL */
-   writel(0x60, &eqos->mtl_regs->txq0_quantum_weight - 0x100);
 
/* Enable Store and Forward mode for TX */
/* Program Tx operating mode */
@@ -1236,9 +1233,7 @@ static int eqos_start(struct udevice *dev)
 
/* Enable Store and Forward mode for RX, since no jumbo frame */
	setbits_le32(&eqos->mtl_regs->rxq0_operation_mode,
-EQOS_MTL_RXQ0_OPERATION_MODE_RSF |
-EQOS_MTL_RXQ0_OPERATION_MODE_FEP |
-EQOS_MTL_RXQ0_OPERATION_MODE_FUP);
+EQOS_MTL_RXQ0_OPERATION_MODE_RSF);
 
/* Transmit/Receive queue fifo size; use all RAM for 1 queue */
	val = readl(&eqos->mac_regs->hw_feature1);
@@ -1314,12 +1309,6 @@ static int eqos_start(struct udevice *dev)
eqos->config->config_mac <<
EQOS_MAC_RXQ_CTRL0_RXQ0EN_SHIFT);
 
-   clrsetbits_le32(&eqos->mac_regs->rxq_ctrl0,
-   EQOS_MAC_RXQ_CTRL0_RXQ0EN_MASK <<
-   EQOS_MAC_RXQ_CTRL0_RXQ0EN_SHIFT,
-   0x2 <<
-   EQOS_MAC_RXQ_CTRL0_RXQ0EN_SHIFT);
-
/* Multicast and Broadcast Queue Enable */
	setbits_le32(&eqos->mac_regs->unused_0a4,
 0x0010);
-- 
2.31.1



Re: [PATCH] spi: stm32_qspi: Fix short data write operation

2021-05-24 Thread Daniil Stas
On Mon, 24 May 2021 09:40:05 +0200
Patrice CHOTARD  wrote:

> Hi Daniil
> 
> On 5/24/21 12:24 AM, Daniil Stas wrote:
> > TCF flag only means that all data was sent to FIFO. To check if the
> > data was sent out of FIFO we should also wait for the BUSY flag to
> > be cleared. Otherwise there is a race condition which can lead to
> > inability to write short (one byte long) data.
> > 
> > Signed-off-by: Daniil Stas 
> > Cc: Patrick Delaunay 
> > Cc: Patrice Chotard 
> > ---
> >  drivers/spi/stm32_qspi.c | 29 +++--
> >  1 file changed, 15 insertions(+), 14 deletions(-)
> > 
> > diff --git a/drivers/spi/stm32_qspi.c b/drivers/spi/stm32_qspi.c
> > index 4acc9047b9..8f4aabc3d1 100644
> > --- a/drivers/spi/stm32_qspi.c
> > +++ b/drivers/spi/stm32_qspi.c
> > @@ -148,23 +148,24 @@ static int _stm32_qspi_wait_cmd(struct
> > stm32_qspi_priv *priv, const struct spi_mem_op *op)
> >  {
> > u32 sr;
> > -   int ret;
> > -
> > -   if (!op->data.nbytes)
> > -   return _stm32_qspi_wait_for_not_busy(priv);
> > +   int ret = 0;
> >  
> > -   ret = readl_poll_timeout(&priv->regs->sr, sr,
> > -sr & STM32_QSPI_SR_TCF,
> > -STM32_QSPI_CMD_TIMEOUT_US);
> > -   if (ret) {
> > -   log_err("cmd timeout (stat:%#x)\n", sr);
> > -   } else if (readl(&priv->regs->sr) & STM32_QSPI_SR_TEF) {
> > -   log_err("transfer error (stat:%#x)\n", sr);
> > -   ret = -EIO;
> > +   if (op->data.nbytes) {
> > +   ret = readl_poll_timeout(&priv->regs->sr, sr,
> > +sr & STM32_QSPI_SR_TCF,
> > +
> > STM32_QSPI_CMD_TIMEOUT_US);
> > +   if (ret) {
> > +   log_err("cmd timeout (stat:%#x)\n", sr);
> > +   } else if (readl(&priv->regs->sr) &
> > STM32_QSPI_SR_TEF) {
> > +   log_err("transfer error (stat:%#x)\n", sr);
> > +   ret = -EIO;
> > +   }
> > +   /* clear flags */
> > +   writel(STM32_QSPI_FCR_CTCF | STM32_QSPI_FCR_CTEF,
> > &priv->regs->fcr); }
> >  
> > -   /* clear flags */
> > -   writel(STM32_QSPI_FCR_CTCF | STM32_QSPI_FCR_CTEF,
> > &priv->regs->fcr);
> > +   if (!ret)
> > +   ret = _stm32_qspi_wait_for_not_busy(priv);
> >  
> > return ret;
> >  }
> >   
> 
> Have you got a simple test to reproduce the described race condition ?
> 
> Thanks
> Patrice

Hi, Patrice

I found this issue on an stm32mp153 based board.
To reproduce it you need to set qspi peripheral clock to a low
value (for example 24 MHz).
Then you can test it in the u-boot console:

STM32MP> clk dump
Clocks:
...
- CK_PER : 24 MHz
...
- QSPI(10) => parent CK_PER(30)
...

STM32MP> sf probe
SF: Detected w25q32jv with page size 256 Bytes, erase size 64 KiB, total 4 MiB
STM32MP> sf erase 0x0030 +1
SF: 65536 bytes @ 0x30 Erased: OK
STM32MP> sf read 0xc410 0x30 10
device 0 offset 0x30, size 0x10
SF: 16 bytes @ 0x30 Read: OK
STM32MP> md.b 0xc410
c410: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
...
STM32MP> mw.b 0xc420 55
STM32MP> sf write 0xc420 0x0030 1
device 0 offset 0x30, size 0x1
SF: 1 bytes @ 0x30 Written: OK
STM32MP> sf read 0xc410 0x0030 10
device 0 offset 0x30, size 0x10
SF: 16 bytes @ 0x30 Read: OK
STM32MP> md.b 0xc410
c410: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
...


With my patch applied the last command result would be:
STM32MP> md.b 0xc410
c410: 55 ff ff ff ff ff ff ff ff ff ff ff ff ff ff ffU...

Thanks,
Daniil


[PATCH] spi: stm32_qspi: Fix short data write operation

2021-05-23 Thread Daniil Stas
The TCF flag only means that all data was sent to the FIFO. To check that
the data was sent out of the FIFO we should also wait for the BUSY flag to be
cleared. Otherwise there is a race condition which can lead to an
inability to write short (one-byte-long) data.

Signed-off-by: Daniil Stas 
Cc: Patrick Delaunay 
Cc: Patrice Chotard 
---
 drivers/spi/stm32_qspi.c | 29 +++--
 1 file changed, 15 insertions(+), 14 deletions(-)

diff --git a/drivers/spi/stm32_qspi.c b/drivers/spi/stm32_qspi.c
index 4acc9047b9..8f4aabc3d1 100644
--- a/drivers/spi/stm32_qspi.c
+++ b/drivers/spi/stm32_qspi.c
@@ -148,23 +148,24 @@ static int _stm32_qspi_wait_cmd(struct stm32_qspi_priv 
*priv,
const struct spi_mem_op *op)
 {
u32 sr;
-   int ret;
-
-   if (!op->data.nbytes)
-   return _stm32_qspi_wait_for_not_busy(priv);
+   int ret = 0;
 
-   ret = readl_poll_timeout(&priv->regs->sr, sr,
-sr & STM32_QSPI_SR_TCF,
-STM32_QSPI_CMD_TIMEOUT_US);
-   if (ret) {
-   log_err("cmd timeout (stat:%#x)\n", sr);
-   } else if (readl(&priv->regs->sr) & STM32_QSPI_SR_TEF) {
-   log_err("transfer error (stat:%#x)\n", sr);
-   ret = -EIO;
+   if (op->data.nbytes) {
+   ret = readl_poll_timeout(&priv->regs->sr, sr,
+sr & STM32_QSPI_SR_TCF,
+STM32_QSPI_CMD_TIMEOUT_US);
+   if (ret) {
+   log_err("cmd timeout (stat:%#x)\n", sr);
+   } else if (readl(&priv->regs->sr) & STM32_QSPI_SR_TEF) {
+   log_err("transfer error (stat:%#x)\n", sr);
+   ret = -EIO;
+   }
+   /* clear flags */
+   writel(STM32_QSPI_FCR_CTCF | STM32_QSPI_FCR_CTEF, &priv->regs->fcr);
}
 
-   /* clear flags */
-   writel(STM32_QSPI_FCR_CTCF | STM32_QSPI_FCR_CTEF, &priv->regs->fcr);
+   if (!ret)
+   ret = _stm32_qspi_wait_for_not_busy(priv);
 
return ret;
 }
-- 
2.31.0



[PATCH] net: dwc_eth_qos: Fix needless phy auto-negotiation restarts

2021-05-23 Thread Daniil Stas
Disabling clk_ck clock leads to link up status loss in phy, which
leads to auto-negotiation restart before each network command
execution.

This issue is especially significant for the PXE boot protocol because of
auto-negotiation restarts before each configuration filename trial.

To avoid this issue don't disable clk_ck clock after it was enabled.

Signed-off-by: Daniil Stas 
Cc: Ramon Fried 
Cc: Joe Hershberger 
Cc: Patrick Delaunay 
Cc: Patrice Chotard 
---
 drivers/net/dwc_eth_qos.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/net/dwc_eth_qos.c b/drivers/net/dwc_eth_qos.c
index e8242ca4e1..2f088c758f 100644
--- a/drivers/net/dwc_eth_qos.c
+++ b/drivers/net/dwc_eth_qos.c
@@ -321,6 +321,7 @@ struct eqos_priv {
void *rx_pkt;
bool started;
bool reg_access_ok;
+   bool clk_ck_enabled;
 };
 
 /*
@@ -591,12 +592,13 @@ static int eqos_start_clks_stm32(struct udevice *dev)
goto err_disable_clk_rx;
}
 
-   if (clk_valid(&eqos->clk_ck)) {
+   if (clk_valid(&eqos->clk_ck) && !eqos->clk_ck_enabled) {
	ret = clk_enable(&eqos->clk_ck);
if (ret < 0) {
pr_err("clk_enable(clk_ck) failed: %d", ret);
goto err_disable_clk_tx;
}
+   eqos->clk_ck_enabled = true;
}
 #endif
 
@@ -648,8 +650,6 @@ static void eqos_stop_clks_stm32(struct udevice *dev)
	clk_disable(&eqos->clk_tx);
	clk_disable(&eqos->clk_rx);
	clk_disable(&eqos->clk_master_bus);
-   if (clk_valid(&eqos->clk_ck))
-   clk_disable(&eqos->clk_ck);
 #endif
 
debug("%s: OK\n", __func__);
-- 
2.31.0



[yakuake] [Bug 435544] Application focus issue

2021-05-17 Thread Stas Egorov
https://bugs.kde.org/show_bug.cgi?id=435544

--- Comment #3 from Stas Egorov  ---
X11
WM is Xfwm 4.16
Compose extension is disabled

-- 
You are receiving this mail because:
You are watching all bug changes.

Re: [PATCH 5/8] net: dwc_eth_qos: add dwc eqos for imx support

2021-05-04 Thread Daniil Stas
Hi, i think there are some issues with this patch.

> @@ -1131,6 +1205,7 @@ static int eqos_start(struct udevice *dev)
>   }
>  
>   /* Configure MTL */
> + writel(0x60, &eqos->mtl_regs->txq0_quantum_weight - 0x100);
>  
>   /* Enable Store and Forward mode for TX */
>   /* Program Tx operating mode */

What is this address: &eqos->mtl_regs->txq0_quantum_weight - 0x100?
Isn't it outside of the MTL registers range?

> @@ -1144,7 +1219,9 @@ static int eqos_start(struct udevice *dev)
>  
>   /* Enable Store and Forward mode for RX, since no jumbo frame */
>   setbits_le32(&eqos->mtl_regs->rxq0_operation_mode,
> -  EQOS_MTL_RXQ0_OPERATION_MODE_RSF);
> +  EQOS_MTL_RXQ0_OPERATION_MODE_RSF |
> +  EQOS_MTL_RXQ0_OPERATION_MODE_FEP |
> +  EQOS_MTL_RXQ0_OPERATION_MODE_FUP);
>  
>   /* Transmit/Receive queue fifo size; use all RAM for 1 queue */
>   val = readl(&eqos->mac_regs->hw_feature1);

Why do you set FEP and FUP bits? It can lead to data corruption as they
allow accepting erroneous packets.

I think these options should only be used in some debugging mode but not
in production.

> @@ -1220,6 +1297,19 @@ static int eqos_start(struct udevice *dev)
>   eqos->config->config_mac <<
>   EQOS_MAC_RXQ_CTRL0_RXQ0EN_SHIFT);
>  
> + clrsetbits_le32(&eqos->mac_regs->rxq_ctrl0,
> + EQOS_MAC_RXQ_CTRL0_RXQ0EN_MASK <<
> + EQOS_MAC_RXQ_CTRL0_RXQ0EN_SHIFT,
> + 0x2 <<
> + EQOS_MAC_RXQ_CTRL0_RXQ0EN_SHIFT);
> +

This line just overrides the value set in the previous line.
Is it a mistake?

> + /* enable promise mode */
> + setbits_le32(&eqos->mac_regs->unused_004[1],
> +  0x1);
> +

Isn't this mode also useful only for debugging?


[yakuake] [Bug 435544] Application focus issue

2021-04-09 Thread Stas Egorov
https://bugs.kde.org/show_bug.cgi?id=435544

Stas Egorov  changed:

   What|Removed |Added

URL||https://invent.kde.org/util
   ||ities/yakuake/-/issues/2

-- 
You are receiving this mail because:
You are watching all bug changes.

[yakuake] [Bug 435544] New: Application focus issue

2021-04-09 Thread Stas Egorov
https://bugs.kde.org/show_bug.cgi?id=435544

Bug ID: 435544
   Summary: Application focus issue
   Product: yakuake
   Version: 3.0.5
  Platform: Gentoo Packages
OS: Linux
Status: REPORTED
  Severity: normal
  Priority: NOR
 Component: general
  Assignee: h...@kde.org
  Reporter: obiw...@vivaldi.net
  Target Milestone: ---

SUMMARY


STEPS TO REPRODUCE
1. Open Yakuake console
2. Switch to any other application
3. Minimize this application
4. Try to input any text

OBSERVED RESULT

No text is entered into the Yakuake window

EXPECTED RESULT

The text must be entered into the Yakuake window. Upon closer inspection, it
turns out that the Yakuake window is out of focus.

SOFTWARE/OS VERSIONS
Linux/KDE Plasma: Gentoo Linux with latest updates
KDE Plasma Version: -
KDE Frameworks Version: 5.77.0
Qt Version: 5.15.2

ADDITIONAL INFORMATION

Sometimes Yakuake stays in the foreground even if another window is in focus.
Because of this, when you enter a command into the console, it is entered into
the window that is in the background. This is very confusing.
At the same time, of course, the checkbox "always stay at the top" is removed.
I have it reproduced in 100% of cases when opening/minimizing another window
from the system tray if Yakuake was opened before. I use Xfwm & LXQt
environment, maybe this is the case?

-- 
You are receiving this mail because:
You are watching all bug changes.

[yakuake] [Bug 435542] Show button in taskbar

2021-04-09 Thread Stas Egorov
https://bugs.kde.org/show_bug.cgi?id=435542

Stas Egorov  changed:

   What|Removed |Added

URL||https://invent.kde.org/util
   ||ities/yakuake/-/issues/3

-- 
You are receiving this mail because:
You are watching all bug changes.

[yakuake] [Bug 435542] Show button in taskbar

2021-04-09 Thread Stas Egorov
https://bugs.kde.org/show_bug.cgi?id=435542

Stas Egorov  changed:

   What|Removed |Added

Version|3.0.5   |unspecified

-- 
You are receiving this mail because:
You are watching all bug changes.

Re: [OpenSIPS-Users] DID via OpenSIPS causing Asterisk to ask for authorization

2021-03-29 Thread Stas Kobzar
Hello Mark,

IMO, it is the Asterisk side. Of course it depends on your setup, but you
probably need an Asterisk SIP peer for OpenSIPS. I do not know about pjsip;
for the older chan_sip it would be something like:

[opensips]
type=friend
deny=0.0.0.0/0.0.0.0
permit=OPENSIPS_IP/255.255.255.255
host=OPENSIPS_IP

You definitely can set this up with FreePBX web ui.



On Mon, Mar 29, 2021 at 10:24 AM Mark Allen  wrote:

> We have a DID. If an incoming INVITE goes via OpenSIPS, Asterisk returns
> '401 Unauthorized' requesting authorization credentials. If we map the DID
> direct to Asterisk it doesn't ask for authorization. Our setup is...
>
> DID ---> OpenSIPS 3.1 Mid_registrar ---> Asterisk (FreePBX)
>
> Is there something I need to configure on OpenSIPS or is it purely an
> Asterisk issue?
___
Users mailing list
Users@lists.opensips.org
http://lists.opensips.org/cgi-bin/mailman/listinfo/users


Re: [OpenSIPS-Users] 3.1 - Mid_Registrar - AOR throttling with WebRTC failing

2020-08-26 Thread Stas Kobzar
Hi Mark,

Glad to hear you made it all work! Looks like it was a real challenge.

Good luck,
Stas

On Wed, Aug 26, 2020 at 8:47 AM Mark Allen  wrote:

> Hi Stas - thanks for getting back to me. That helped me move forward a lot
> - particularly where you included what you see in the Path field - it
> helped to exclude a range of possible causes for the issues I was seeing.
>
> > If you do not have "path" set in your case the problem is probably
> there.
>
> Yes, because of how the "lumps" system works, and because WebRTC phone is
> connecting directly to OpenSIPS server, the incoming REGISTER didn't have a
> path, so to get the path saved in "location" I had to loop back to OpenSIPS
> again (thanks very much Johan De Clercq for filling in that part of the
> jigsaw). That then introduced some other problems that I had to resolve
> (particularly with RTPEngine going crazy looping back on itself and sending
> CPU temperature over 100degC - but that's another story!), but I've now got
> AOR throttling working with the mid-registrar successfully. Still a few
> bits to tweak with my script but it looks like I'm on the home straight.
> Thanks once again for all your help
>
> cheers,
>
> Mark
>
>
>
> On Fri, 21 Aug 2020 at 14:59, Stas Kobzar  wrote:
>
>> Hello Mark,
>>
>> In my case I do have a path in the location record. Here is my example
>> from "ul show" (I changed my real domain and IPs):
>> AOR:: 9...@example.com
>> Contact:: sip:suvp4v56@1p6pc0g6m3ml.invalid;transport=ws
>> Q=
>> Expires:: 494
>> Callid:: i1tmiaipa3l2nvvhmairvu
>> Cseq:: 28
>> User-agent:: JsSIP 3.5.3
>> Path:: > ;r2=on;lr>,> 10.0.0.213:47326>
>> State:: CS_SYNC
>> Flags:: 0
>> Cflags::
>> Socket:: udp:10.0.0.185:5060
>> Methods:: 5503
>> SIP_instance::
>> 
>>
>> And here is my mysql location record:
>>
>> mysql> select contact, path from locations where username =9170;
>>
>> ++--+
>> | contact| path
>>
>>   |
>>
>> ++--+
>> | sip:suvp4v56@1p6pc0g6m3ml.invalid;transport=ws |
>> ,> ;transport=wss;r2=on;lr;received=sip:107.179.246.213:47364> |
>>
>> ++--+
>>
>> If you do not have "path" set in your case the problem is probably there.
>> My lookup is not mid_register but it is close to what you have. I only
>> use parameter "m" to lookup in memory.
>>
>> On Fri, Aug 21, 2020 at 9:19 AM Mark Allen  wrote:
>>
>>> What am I looking for?
>>>
>>> INVITE from Asterisk to Opensips looks fine. Contact info from
>>> "location" matches that seen in console for web phone.
>>>
>>> Problem seems to be that the address is not recognised as a web socket
>>> rather than a host name. It's not NATed but tried fix_nated_register() and
>>> fix_nated_contact() but it made no difference.
>>>
>>> On Fri, 21 Aug 2020, 13:23 Slava Bendersky via Users, <
>>> users@lists.opensips.org> wrote:
>>>
>>>> Please check contact header.
>>>>
>>>> volga629
>>>>
>>>> --
>>>> *From: *"Mark Allen" 
>>>> *To: *"OpenSIPS users mailling list" 
>>>> *Sent: *Friday, August 21, 2020 8:08:18 AM
>>>> *Subject: *Re: [OpenSIPS-Users] 3.1 - Mid_Registrar - AOR throttling
>>>> withWebRTC failing
>>>>
>>>> I've not received any feedback on this regarding whether or not what
>>>> I'm doing should be working. Trying to find a workaround has just led to a
>>>> number of dead-ends. Can anyone please help me with this?
>>>> We are using mid-registrar with AOR Throttling talking to
>>>> Asterisk/FreePBX.

Re: [OpenSIPS-Users] 3.1 - Mid_Registrar - AOR throttling with WebRTC failing

2020-08-21 Thread Stas Kobzar
t; module, path, and AOR throttling so that it should work for calls
>>> originating from the main registrar?
>>>
>>> I'm stuck on how to move forward with this
>>>
>>> Cheers,
>>>
>>> Mark
>>>
>>> Relevant code snippets...
>>>
>>> loadmodule "mid_registrar.so"
>>> modparam("mid_registrar", "mode", 2) /* 0 = mirror / 1 = ct / 2 = AoR */
>>> modparam("mid_registrar", "outgoing_expires", 3600)
>>>
>>> add_path_received();
>>> $avp(returncode) = mid_registrar_save("location","p0v");
>>> switch ($avp(returncode)) {
>>> case 1:
>>> route(resolve_registrar);
>>> $ru = "sip:" + $avp(main_registrar) + ":5060";
>>> t_on_failure("1");
>>> t_relay();
>>> break;
>>> case 2:
>>> break;
>>> default:
>>> }
>>>
>>> if (!mid_registrar_lookup("location")) {
>>> t_reply(404, "Not Found");
>>> exit;
>>> }
>>>
>>>
>>> NB - route(resolve_registrar) sets the variable $avp(main_registrar) to
>>> the IP address of the Asterisk server
>>>
>>> On Thu, 30 Jul 2020 at 09:16, Mark Allen  wrote:
>>>
>>>> We are working on a test setup, hoping to move to a production system
>>>> in mid-August. We want to use mid-registrar AOR throttling. Users will
>>>> connect through OpenSIPS using a combination of SIP and WebRTC endpoints,
>>>> registering to an extension on an Asterisk main-registrar...
>>>>
>>>>   +--+
>>>> ---> |  |  +--+
>>>> ---> | OpenSIPS | ---> | Asterisk |
>>>>  ---> |  |  +--+
>>>>   +--+
>>>>
>>>> Multiple SIP phones (hardware or softphones) registering via an
>>>> OpenSIPS 3.1 mid_registration AOR is working fine. A call to the extension
>>>> number on Asterisk results in all mid-registered SIP extensions ringing and
>>>> when one answers, the other devices register a missed call. So far, so 
>>>> good.
>>>>
>>>> With 3.0 - we had a problem with WebRTC "phones" (even when just using
>>>> mid_registrar in "mirroring" mode). Webphone could register and call other
>>>> phones without a problem. However, calls to the WebPhone failed - there was
>>>> a problem with the WebSocket addressing giving "476 Unresolvable
>>>> destination" when the call originates from the main registrar - e.g. one
>>>> extension calling another. The /var/log/syslog entry said...
>>>>
>>>>   ERROR:core:sip_resolvehost: forced proto 6 not matching sips uri
>>>>   CRITICAL:core:mk_proxy: could not resolve hostname:
>>>> "4xp44jxl0qq0.invalid"
>>>>   ERROR:tm:uri2proxy: bad host name in URI >>> 4xp44jxl0qq0.invalid;rtcweb-breaker=yes;transport=wss>
>>>>   ERROR:tm:t_forward_nonack: failure to add branches
>>>>
>>>> Stas Kobzar gave me a way to resolve this -
>>>> http://lists.opensips.org/pipermail/users/2020-July/043443.html  As we
>>>> were using 3.0, I used the "path" module and  "add_path_received()" to
>>>> handle this for WebRTC. This worked for a single device registered to an
>>>> address. However, as far as I could see, using "path" effectively bypassed
>>>> the "contact" address held in the OpenSIPS "location" table so it didn't
>>>> work for AOR throttling.
>>>>
>>>> I was hoping that, with mid_registrar on 3.1 baking in path support, I
>>>> could just use "mid_registrar_save('location','p0v')" to store the WebRTC
>>>> destination path in the "location" table. Then, with a call to the WebRTC
>>>> endpoint from the main registrar, "mid_registrar_lookup('location')" would
>>>> use the stored path from the "location" table to send traffic on to the
>>>> WebRTC phone and it would work fine with AOR throttling. However, that's
>>>> not happening, and looking at the "location" table, no path seems to
>>>> be being stored.
>>>>
>>>> If I register a WebRTC "phone" first, the path is included on the

Re: [OpenSIPS-Users] Flatstore files missing some calls

2020-08-06 Thread Stas Kobzar
Sorry, Vic
I was talking about a different module "db_text". I just did not get the
subject right.
I do not know about flatstore, sorry.

However, you can still check your permissions for "/var/log/acc". Or just
temporarily use a "/tmp" path to make sure this is not a permission problem.

On Thu, Aug 6, 2020 at 2:14 PM Vic Jolin  wrote:

> Staz,
>
> Hi thanks for the reply, I forgot I think to mention about my config
>
> loadmodule "db_flatstore.so"
> modparam("db_flatstore", "flush", 1)
> modparam("db_flatstore", "suffix", ".log_SERVERIP")
>
> loadmodule "acc.so"
> /* what special events should be accounted ? */
> modparam("acc", "early_media", 1)
> modparam("acc", "report_cancels", 1)
> /* by default we do not adjust the direct of the sequential requests.
>if you enable this parameter, be sure the enable "append_fromtag"
>in "rr" module */
> modparam("acc", "detect_direction", 0)
> modparam("acc", "extra_fields", "db: callerid->callerid; ani->ani;
> prefix->prefix; src_ip->src_ip; dst_ip->dst_ip; acctid->acctid;
> carrierid->carrierid; ruleid->ruleid; lrn->lrn; orig_ani->orig_ani")
> #modparam("acc", "extra_fields", "db: callerid->callerid; ani->ani;
> prefix->prefix; src_ip->src_ip; dst_ip->dst_ip; acctid->acctid;
> carrierid->carrierid; ruleid->ruleid;")
> modparam("acc", "db_url", "flatstore:/var/log/acc")
>
> Is there  a proper placement of
> do_accounting("db|log", "cdr|missed|failed");
>
> In my config I have this in the route before
>
> dp_translate($(avp(groupid){s.int}), "$rU", $rU, $var(dp_attr));
>
>
> On Fri, Aug 7, 2020 at 1:39 AM Stas Kobzar  wrote:
>
>> Hello,
>>
>> You should create the file with headers. You can copy required storage
>> file from here:
>> https://github.com/OpenSIPS/opensips/tree/master/scripts/dbtext/opensips
>>
>> And, of course, make sure you have good owner and permissions set to the
>> file.
>>
>> On Thu, Aug 6, 2020 at 1:26 PM Vic Jolin  wrote:
>>
>>> Hello,
>>>
>>> What are the reasons why flatstore files are not being created?
>>>
>>> Im  seeing this output in a binary journal file, and not from a normal
>>> log file I have my output logs in /var/log/messages (but we do not see it
>>> coming here as well)
>>>
>>>
>>>
>>> ACC: call ended:
>>> created=1596585092;call_start_time=1596585108;duration=5;ms_duration=5268;setuptime=16;method=INVITE;from_tag=13c1b24f27e408db;to_tag=ZtNe611a9391D;call_id=2a2ac4f263616c6c0015c430
>>>
>>> But no flatstore file created or updated
>>>
>>> But there is no flatstore files created. Is this a server issue? A
>>> resource like HD write speed? or some misconfiguration?
___
Users mailing list
Users@lists.opensips.org
http://lists.opensips.org/cgi-bin/mailman/listinfo/users


Re: [OpenSIPS-Users] Flatstore files missing some calls

2020-08-06 Thread Stas Kobzar
By the way, you also have to copy the "version" flat text storage file
corresponding to your OpenSIPS version,
and set the correct path in the configuration file,
like: text:///opt/opensips/etc/opensips/db
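
A minimal sketch of preparing such a db_text directory (the path, ownership, and header line here are assumptions for illustration; in practice, copy the real table files from scripts/dbtext/opensips/ in the OpenSIPS source tree rather than hand-writing them):

```shell
# Sketch only: set up a db_text storage directory for OpenSIPS.
DB="$(mktemp -d)"   # in a real setup use e.g. /opt/opensips/etc/opensips/db

# Every db_text table is a plain file whose first line declares its columns.
# This is the header of the "version" table as shipped with OpenSIPS.
printf 'table_name(str) table_version(int)\n' > "$DB/version"

# Make sure the files are readable/writable by the opensips user.
chmod 660 "$DB/version"

# Then point opensips.cfg at it, e.g.:  db_url = "text://$DB"
ls -l "$DB"
```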

On Thu, Aug 6, 2020 at 1:37 PM Stas Kobzar  wrote:

> Hello,
>
> You should create the file with headers. You can copy required storage
> file from here:
> https://github.com/OpenSIPS/opensips/tree/master/scripts/dbtext/opensips
>
> And, of course, make sure you have good owner and permissions set to the
> file.
>
> On Thu, Aug 6, 2020 at 1:26 PM Vic Jolin  wrote:
>
>> Hello,
>>
>> What are the reasons why flatstore files are not being created?
>>
>> Im  seeing this output in a binary journal file, and not from a normal
>> log file I have my output logs in /var/log/messages (but we do not see it
>> coming here as well)
>>
>>
>>
>> ACC: call ended:
>> created=1596585092;call_start_time=1596585108;duration=5;ms_duration=5268;setuptime=16;method=INVITE;from_tag=13c1b24f27e408db;to_tag=ZtNe611a9391D;call_id=2a2ac4f263616c6c0015c430
>>
>> But no flatstore file created or updated
>>
>> But there is no flatstore files created. Is this a server issue? A
>> resource like HD write speed? or some misconfiguration?
___
Users mailing list
Users@lists.opensips.org
http://lists.opensips.org/cgi-bin/mailman/listinfo/users


Re: [OpenSIPS-Users] Flatstore files missing some calls

2020-08-06 Thread Stas Kobzar
Hello,

You should create the file with headers. You can copy required storage file
from here:
https://github.com/OpenSIPS/opensips/tree/master/scripts/dbtext/opensips

And, of course, make sure you have good owner and permissions set to the
file.

On Thu, Aug 6, 2020 at 1:26 PM Vic Jolin  wrote:

> Hello,
>
> What are the reasons why flatstore files are not being created?
>
> Im  seeing this output in a binary journal file, and not from a normal log
> file I have my output logs in /var/log/messages (but we do not see it
> coming here as well)
>
>
>
> ACC: call ended:
> created=1596585092;call_start_time=1596585108;duration=5;ms_duration=5268;setuptime=16;method=INVITE;from_tag=13c1b24f27e408db;to_tag=ZtNe611a9391D;call_id=2a2ac4f263616c6c0015c430
>
> But no flatstore file created or updated
>
> But there is no flatstore files created. Is this a server issue? A
> resource like HD write speed? or some misconfiguration?
___
Users mailing list
Users@lists.opensips.org
http://lists.opensips.org/cgi-bin/mailman/listinfo/users


Re: [OpenSIPS-Users] OpenSIPS 3.1 - raise_event() crashes OpenSIPS with segmentation fault

2020-07-28 Thread Stas Kobzar
I mean, you are welcome, Mark :) sorry
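
For the record, the call that works passes both AVPs, keys and values (a sketch using the names from this thread):

```
# raise_event() segfaults in 3.1 if only one AVP is passed;
# pass the keys AVP and the values AVP together
$avp(keys) = "registered";
$avp(values) = "true";
raise_event("E_WFC_REGISTERED", $avp(keys), $avp(values));
```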

On Tue, Jul 28, 2020 at 10:45 AM Mark Allen  wrote:

> [SOLVED]
>
> Hi Stas - good call! It's a change in behaviour from 3.0.
>
> In 3.0 documentation says...
>
> The next two parameters should be AVPs and they are optional. If only
> one is present, it should contain the values attached to the event.
>
> In 3.1 it removes mention of the behaviour if only one AVP is present, but
> it's not obvious and perhaps it could be documented in the
> https://www.opensips.org/Documentation/Migration-3-0-0-to-3-1-0 guide?
> Also, perhaps 3.1 should handle it better if a parameter is missing rather
> than giving a segfault (something for 3.2?)
>
> Once again I'm indebted to you for your help
>
> thanks very much & all the best
>
> Mark
>
>
>
>
>
> On Tue, 28 Jul 2020 at 15:22, Stas Kobzar  wrote:
>
>> Hi Allen,
>>
>> Did you try with two parameters: name, value?
>>
>>*$avp(keys) = "registered";*
>>$avp(values) = "true";
>> xlog("Raised E_WFC_REGISTERED $avp(values)");
>> raise_event("E_WFC_REGISTERED", *$avp(keys)*, $avp(values));
>>
>> I know they are said to be optional in the documentation but probably it
>> is optional for two. Either no params or if you pass parameters, you have
>> to pass both.
>>
>>
>> On Tue, Jul 28, 2020 at 9:59 AM Mark Allen  wrote:
>>
>>> We're upgrading from 3.0 to 3.1. Everything seems ok except we get a
>>> weird error. We subscribe a dynamic event...
>>>
>>> startup_route {
>>>   subscribe_event("E_WFC_REGISTERED", "udp:127.0.0.1:");
>>> }
>>>
>>> which we can see works from /var/log/syslog...
>>>
>>> event_datagram:mod_init: initializing module ...
>>> core:evi_publish_event: Registered event >>
>>> and in the script we invoke it with...
>>>
>>> if(is_method("REGISTER")) {
>>> $avp(values) = "true";
>>> xlog("Raised E_WFC_REGISTERED $avp(values)");
>>> raise_event("E_WFC_REGISTERED",$avp(values));
>>>
>>> When a phone registers, raise_event() is triggered and OpenSIPS crashes
>>> with a segmentation fault - shown in /var/log/syslog...
>>>
>>> Raised E_WFC_REGISTERED true
>>> CRITICAL:core:sig_usr: segfault in process pid: 10525, id: 8
>>> segfault at 8 ip 55cef821313f sp 7ffcdf4d3410 error 4 in
>>> opensips[55cef801a000+264000]
>>> kernel: [197593.785622] Code: 0e 00 4c 89 ef e8 1b 70 fc ff 49 63 74
>>> 24 08 49 8b 3c 24 e8 51 a1 fc ff 48 89 c2 48 8d 35 8f 0d 07 00 4c 89 ef e8
>>> fb 6f fc ff <49> 8b 46 08 48 85 c0 74 0b 48 83 78 18 00 0f 84 a5 02 00 00
>>> e8 34
>>> INFO:core:handle_sigs: child process 10525 exited by a signal 11
>>> INFO:core:handle_sigs: core was generated
>>> INFO:core:handle_sigs: terminating due to SIGCHLD
>>>
>>> If I comment out the raise_event() line - OpenSIPS seems fine and
>>> doesn't crash when passing through this code.
>>>
>>>
>>>
>>> Running gdb to get core file backtrace we see...
>>>
>>> Core was generated by `/usr/sbin/opensips -P /run/opensips/opensips.pid
>>> -f /etc/opensips/opensips.cfg'.
>>> Program terminated with signal SIGSEGV, Segmentation fault.
>>> #0  evi_build_payload (params=0x0, method=0x7f931f5b6f08, id=id@entry=0,
>>> extra_param_k=extra_param_k@entry=0x0,
>>> extra_param_v=extra_param_v@entry=0x0) at evi/evi_transport.c:159
>>> 159 if (params->first && !params->first->name.s) {
>>> (gdb) bt full
>>> #0  evi_build_payload (params=0x0, method=0x7f931f5b6f08, id=id@entry=0,
>>> extra_param_k=extra_param_k@entry=0x0,
>>> extra_param_v=extra_param_v@entry=0x0) at evi/evi_transport.c:159
>>> param = 
>>> param_obj = 0x0
>>> tmp = 
>>> ret_obj = 0x7f9323135fe0
>>> payload = 0x0
>>> __FUNCTION__ = "evi_build_payload"
>>> #1  0x7f931b7d934f in datagram_raise (msg=,
>>> ev_name=, sock=0x7f931f5c54c8, params=)
>>> at event_datagram.c:315
>>> ret = 
>>> buf = 
>>> __FUNCTION__ = "datagram_raise"
>>> #2  0x55cef82148fb in evi_raise_event_msg (msg=msg@entry=0x7f9323134890,
>>> id=id@entr

Re: [OpenSIPS-Users] OpenSIPS 3.1 - raise_event() crashes OpenSIPS with segmentation fault

2020-07-28 Thread Stas Kobzar
Glad it helped. You are welcome, Allen

On Tue, Jul 28, 2020 at 10:45 AM Mark Allen  wrote:

> [SOLVED]
>
> Hi Stas - good call! It's a change in behaviour from 3.0.
>
> In 3.0 documentation says...
>
> The next two parameters should be AVPs and they are optional. If only
> one is present, it should contain the values attached to the event.
>
> In 3.1 it removes mention of the behaviour if only one AVP is present, but
> it's not obvious and perhaps it could be documented in the
> https://www.opensips.org/Documentation/Migration-3-0-0-to-3-1-0 guide?
> Also, perhaps 3.1 should handle it better if parameter is missing rather
> than giving a segfault (something for 3.2?)
>
> Once again I'm indebted to you for your help
>
> thanks very much & all the best
>
> Mark
>
>
>
>
>
> On Tue, 28 Jul 2020 at 15:22, Stas Kobzar  wrote:
>
>> Hi Allen,
>>
>> Did you try with two parameters: name, value?
>>
>>*$avp(keys) = "registered";*
>>$avp(values) = "true";
>> xlog("Raised E_WFC_REGISTERED $avp(values)");
>> raise_event("E_WFC_REGISTERED", *$avp(keys)*, $avp(values));
>>
>> I know they are said to be optional in the documentation but probably it
>> is optional for two. Either no params or if you pass parameters, you have
>> to pass both.
>>
>>
>> On Tue, Jul 28, 2020 at 9:59 AM Mark Allen  wrote:
>>
>>> We're upgrading from 3.0 to 3.1. Everything seems ok except we get a
>>> weird error. We subscribe a dynamic event...
>>>
>>> startup_route {
>>>   subscribe_event("E_WFC_REGISTERED", "udp:127.0.0.1:");
>>> }
>>>
>>> which we can see works from /var/log/syslog...
>>>
>>> event_datagram:mod_init: initializing module ...
>>> core:evi_publish_event: Registered event >>
>>> and in the script we invoke it with...
>>>
>>> if(is_method("REGISTER")) {
>>> $avp(values) = "true";
>>> xlog("Raised E_WFC_REGISTERED $avp(values)");
>>> raise_event("E_WFC_REGISTERED",$avp(values));
>>>
>>> When a phone registers, raise_event() is triggered and OpenSIPS crashes
>>> with a segmentation fault - shown in /var/log/syslog...
>>>
>>> Raised E_WFC_REGISTERED true
>>> CRITICAL:core:sig_usr: segfault in process pid: 10525, id: 8
>>> segfault at 8 ip 55cef821313f sp 7ffcdf4d3410 error 4 in
>>> opensips[55cef801a000+264000]
>>> kernel: [197593.785622] Code: 0e 00 4c 89 ef e8 1b 70 fc ff 49 63 74
>>> 24 08 49 8b 3c 24 e8 51 a1 fc ff 48 89 c2 48 8d 35 8f 0d 07 00 4c 89 ef e8
>>> fb 6f fc ff <49> 8b 46 08 48 85 c0 74 0b 48 83 78 18 00 0f 84 a5 02 00 00
>>> e8 34
>>> INFO:core:handle_sigs: child process 10525 exited by a signal 11
>>> INFO:core:handle_sigs: core was generated
>>> INFO:core:handle_sigs: terminating due to SIGCHLD
>>>
>>> If I comment out the raise_event() line - OpenSIPS seems fine and
>>> doesn't crash when passing through this code.
>>>
>>>
>>>
>>> Running gdb to get core file backtrace we see...
>>>
>>> Core was generated by `/usr/sbin/opensips -P /run/opensips/opensips.pid
>>> -f /etc/opensips/opensips.cfg'.
>>> Program terminated with signal SIGSEGV, Segmentation fault.
>>> #0  evi_build_payload (params=0x0, method=0x7f931f5b6f08, id=id@entry=0,
>>> extra_param_k=extra_param_k@entry=0x0,
>>> extra_param_v=extra_param_v@entry=0x0) at evi/evi_transport.c:159
>>> 159 if (params->first && !params->first->name.s) {
>>> (gdb) bt full
>>> #0  evi_build_payload (params=0x0, method=0x7f931f5b6f08, id=id@entry=0,
>>> extra_param_k=extra_param_k@entry=0x0,
>>> extra_param_v=extra_param_v@entry=0x0) at evi/evi_transport.c:159
>>> param = 
>>> param_obj = 0x0
>>> tmp = 
>>> ret_obj = 0x7f9323135fe0
>>> payload = 0x0
>>> __FUNCTION__ = "evi_build_payload"
>>> #1  0x7f931b7d934f in datagram_raise (msg=,
>>> ev_name=, sock=0x7f931f5c54c8, params=)
>>> at event_datagram.c:315
>>> ret = 
>>> buf = 
>>> __FUNCTION__ = "datagram_raise"
>>> #2  0x55cef82148fb in evi_raise_event_msg (msg=msg@entry=0x7f9323134890,
>>> id=id@entr

Re: [OpenSIPS-Users] OpenSIPS 3.1 - raise_event() crashes OpenSIPS with segmentation fault

2020-07-28 Thread Stas Kobzar
Hi Allen,

Did you try with two parameters: name, value?

   *$avp(keys) = "registered";*
   $avp(values) = "true";
xlog("Raised E_WFC_REGISTERED $avp(values)");
raise_event("E_WFC_REGISTERED", *$avp(keys)*, $avp(values));

I know they are said to be optional in the documentation, but they are
probably only optional as a pair: either pass no parameters at all, or
pass both.


On Tue, Jul 28, 2020 at 9:59 AM Mark Allen  wrote:

> We're upgrading from 3.0 to 3.1. Everything seems ok except we get a weird
> error. We subscribe a dynamic event...
>
> startup_route {
>   subscribe_event("E_WFC_REGISTERED", "udp:127.0.0.1:");
> }
>
> which we can see works from /var/log/syslog...
>
> event_datagram:mod_init: initializing module ...
> core:evi_publish_event: Registered event 
> and in the script we invoke it with...
>
> if(is_method("REGISTER")) {
> $avp(values) = "true";
> xlog("Raised E_WFC_REGISTERED $avp(values)");
> raise_event("E_WFC_REGISTERED",$avp(values));
>
> When a phone registers, raise_event() is triggered and OpenSIPS crashes
> with a segmentation fault - shown in /var/log/syslog...
>
> Raised E_WFC_REGISTERED true
> CRITICAL:core:sig_usr: segfault in process pid: 10525, id: 8
> segfault at 8 ip 55cef821313f sp 7ffcdf4d3410 error 4 in
> opensips[55cef801a000+264000]
> kernel: [197593.785622] Code: 0e 00 4c 89 ef e8 1b 70 fc ff 49 63 74
> 24 08 49 8b 3c 24 e8 51 a1 fc ff 48 89 c2 48 8d 35 8f 0d 07 00 4c 89 ef e8
> fb 6f fc ff <49> 8b 46 08 48 85 c0 74 0b 48 83 78 18 00 0f 84 a5 02 00 00
> e8 34
> INFO:core:handle_sigs: child process 10525 exited by a signal 11
> INFO:core:handle_sigs: core was generated
> INFO:core:handle_sigs: terminating due to SIGCHLD
>
> If I comment out the raise_event() line - OpenSIPS seems fine and doesn't
> crash when passing through this code.
>
>
>
> Running gdb to get core file backtrace we see...
>
> Core was generated by `/usr/sbin/opensips -P /run/opensips/opensips.pid -f
> /etc/opensips/opensips.cfg'.
> Program terminated with signal SIGSEGV, Segmentation fault.
> #0  evi_build_payload (params=0x0, method=0x7f931f5b6f08, id=id@entry=0,
> extra_param_k=extra_param_k@entry=0x0,
> extra_param_v=extra_param_v@entry=0x0) at evi/evi_transport.c:159
> 159 if (params->first && !params->first->name.s) {
> (gdb) bt full
> #0  evi_build_payload (params=0x0, method=0x7f931f5b6f08, id=id@entry=0,
> extra_param_k=extra_param_k@entry=0x0,
> extra_param_v=extra_param_v@entry=0x0) at evi/evi_transport.c:159
> param = 
> param_obj = 0x0
> tmp = 
> ret_obj = 0x7f9323135fe0
> payload = 0x0
> __FUNCTION__ = "evi_build_payload"
> #1  0x7f931b7d934f in datagram_raise (msg=,
> ev_name=, sock=0x7f931f5c54c8, params=)
> at event_datagram.c:315
> ret = 
> buf = 
> __FUNCTION__ = "datagram_raise"
> #2  0x55cef82148fb in evi_raise_event_msg (msg=msg@entry=0x7f9323134890,
> id=id@entry=20, params=params@entry=0x0)
> at evi/event_interface.c:208
> subs = 0x7f931f5c55a8
> prev = 
> now = 1595943308
> flags = 1073741838
> pflags = 0
> ret = 0
> __FUNCTION__ = "evi_raise_event_msg"
> #3  0x55cef8216afb in evi_raise_script_event (msg=0x7f9323134890,
> id=20, _a=, _v=)
> at evi/event_interface.c:430
> vals = 
> attrs = 
> v_avp = 
> a_avp = 
> err = 
> val = {n = 587654904, s = {s = 0x7f932306e6f8 "\002", len =
> -133061172}}
> attr = {n = 0, s = {s = 0x0, len = -133445686}}
> at = 
> params = 0x0
> __FUNCTION__ = "evi_raise_script_event"
> #4  0x55cef8068c5f in w_raise_event (msg=,
> ev_id=, attrs_avp=,
> vals_avp=) at core_cmds.c:1204
> __FUNCTION__ = "w_raise_event"
> #5  0x55cef8086199 in do_action (a=0x7f932304d020, msg=0x7f9323134890)
> at action.c:972
> ret = 
> v = 
> i = 
> len = 
> cmatch = 
> aitem = 
> adefault = 
> spec = 
> val = {rs = {s = 0x7f932304c748 "\002", len = 0}, ri = -129751552,
> flags = 21966}
> start = {tv_sec = 94347416839552, tv_usec = 140269924432168}
> end_time = 
> cmd = 0x55cef832c550 
> acmd = 
> cmdp = {0x14, 0x7f932304cf88, 0x0, 0x2, 0x7f9323134890,
> 0x55cef80bb253 , 0x1, 0xc}
> tmp_vals = {{rs = {s = 0x4  address 0x4>, len = 587509104}, ri = 18, flags = 0}, {rs = {s =
> 0x7f9323134890 "\001", len = 587509104}, ri = 588466320, flags = 32659},
> {rs = {s = 0x55cef8442600 <_oser_err_info> "", len = -133061748}, ri =
> -131568035, flags = 21966}, {rs = {s = 0x3  address 0x3>, len = 587512256}, ri = 3, flags = 0}, {rs = {s =
> 0x7ffcdf4d3790 "\220H\023#\223\177", len = 587509104}, ri = -131568035,
> flags = 21966}, {rs = {s = 0x3  0x3>, len = 0}, ri 

Re: [OpenSIPS-Users] SIP to WebRTC via OpenSIPS mid-registrar fails: forced proto 6 not matching sips uri

2020-07-23 Thread Stas Kobzar
Hi Mark,

Glad it helped. I actually did not know about the chan_sip option
supportpath, because I was using an additional OpenSIPS for wss to WebRTC.
Thanks for the details.

Good luck

On Thu, Jul 23, 2020 at 9:19 AM Mark Allen  wrote:
>
> [SOLVED]
>
> Thanks Stas. Your workaround did solve the problem and I see that with 3.1 
> path support is baked into mid-registrar module as options to 
> mid_registrar_save().
>
> Once we added in the path module functionality, at first it didn't work. 
> Looking at sngrep traces we could see that the path information was appended 
> on the inbound route from OpenSIPS to Asterisk, but when Asterisk made the 
> call to the outbound destination it failed to include it as route info. This 
> was resolved by setting supportpath="yes" in sip.conf and worked with 
> CHAN_SIP. We tried to get it working with PJSIP without any joy but it's not 
> a priority for us at the moment so we'll have to investigate the cause later.
>
> Thanks for your help Stas, and thanks also to the others who took the time to 
> reply.
>
> cheers,
>
> Mark
>
>
> On Tue, 14 Jul 2020 at 16:23, Stas Kobzar  wrote:
>>
>> Hello Mark,
>>
>> I had a similar challenge. Using "path" module on both opensips helps
>> to overcome this problem.
>> https://opensips.org/docs/modules/3.2.x/path.html
>>
>> In your mid-registerer you need to enable path support. See "save"
>> function params p0 and v.
>> in your webrtc opensips use path module and function add_path_received
>>
>> On Tue, Jul 14, 2020 at 11:14 AM Mark Allen  wrote:
>> >
>> > I'm new to OpenSIPS and I've hit a problem I can't find a way past
>> >
>> > We have a test setup with an OpenSIPS mid-registrar in front of an 
>> > Asterisk PBX. Mid-registrar is currently in mode 1 (registration 
>> > throttling). We have SIP and WebRTC endpoints that we want to use.
>> >
>> > Current state is:
>> >
>> > REGISTER:  WebRTC webphone (Mizutech) -> OpenSIPS Mid-registrar -> 
>> > Asterisk  = success
>> > REGISTER:  SIP softphone (LinPhone)   -> OpenSIPS Mid-registrar -> 
>> > Asterisk  = success
>> >
>> > INVITE:SIP softphone-> OpenSIPS -> Asterisk -> OpenSIPS -> SIP 
>> > softphone   = success, call connects with audio both ways
>> > INVITE:WebRTC webphone  -> OpenSIPS -> Asterisk -> OpenSIPS -> SIP 
>> > softphone   = success, call connects with audio both ways
>> > INVITE:SIP softphone-> OpenSIPS -> Asterisk -> OpenSIPS -> WebRTC 
>> > webphone = fails with "476 Unresolvable destination"
>> >
>> > syslog messages:
>> > ERROR:core:sip_resolvehost: forced proto 6 not matching sips uri
>> > CRITICAL:core:mk_proxy: could not resolve hostname: "4xp44jxl0qq0.invalid"
>> > ERROR:tm:uri2proxy: bad host name in URI 
>> > 
>> > ERROR:tm:t_forward_nonack: failure to add branches
>> >
>> >
>> > Following past reports that I've found with a similar error, 
>> > fix_nated_contact() is run on INVITE messages just before rtpengine flags 
>> > are set and the t_relay() command, but it doesn't appear to make any 
>> > difference. If I change the t_relay() to t_relay(0x04,) to disable DNS 
>> > Failover, I still see the same errors in the log file. I've also checked 
>> > the record in the OpenSIPS DB "location" table and it seems to me that it 
>> > has the correct contact_id and contact info for the destination...
>> >
>> > contact_id: 2004383309156582802
>> > contact:
>> > sips:11001@4xp44jxl0qq0.invalid;rtcweb-breaker=yes;transport=wss
>> >
>> > I'm stuck on where I can go from here  - any help very much appreciated!
>> >
>> > thx
>> >
>> > Mark
>> >
>> >
>> > Setup:
>> > OpenSIPS 3.0.2 on Debian Buster
>> > RTPEngine Version: 8.4.0.0+0~mr8.4.0.0
>> >
>> > INVITE:
>> > 2020/07/14 14:22:06.176544 192.168.50.185:5060 -> 192.168.50.69:5060
>> > INVITE sip:11001@192.168.50.69:5060;ctid=2004383309156582802 SIP/2.0
>> > Via: SIP/2.0/UDP 
>> > 192.168.50.185:5060;rport;branch=z9hG4bKPj3e87a449-f4cc-4128-abbe-95706a1a44a0
>> > From: "11002" 
>> > ;tag=1c03916d-d086-479a-b984-ff5bbbf3aba8
>> > To: 
>> > Contact: 
>> > Call-ID: d1524788-cac2-4bea-a905-4e17ba006688
>> > CSeq: 24456 INVITE
>> > Allow: OPTIONS, REGIS

Re: [OpenSIPS-Users] SIP to WebRTC via OpenSIPS mid-registrar fails: forced proto 6 not matching sips uri

2020-07-14 Thread Stas Kobzar
I see, Mark. That is true; in my case, I split WebRTC off to a separate
OpenSIPS (a newer version), as our platform was too old.
I still think path module function should help:
https://opensips.org/docs/modules/3.1.x/path.html#func_add_path_received

Good luck

On Tue, Jul 14, 2020 at 11:48 AM Mark Allen  wrote:
>
> Thanks Stas - I'll have a look at that.
>
> For clarification, we only have one OpenSIPS server acting as mid-registrar. 
> Endpoints register through it to extensions on Asterisk, and Asterisk acts as 
> B2BUA for calls from one extension to another. We've got a lot of additional 
> functionality linked to the Asterisk server so our main need for OpenSIPS is 
> to reduce unnecessary load (e.g. re-REGISTER from mobile devices).
>
> On Tue, 14 Jul 2020 at 16:23, Stas Kobzar  wrote:
>>
>> Hello Mark,
>>
>> I had a similar challenge. Using "path" module on both opensips helps
>> to overcome this problem.
>> https://opensips.org/docs/modules/3.2.x/path.html
>>
>> In your mid-registerer you need to enable path support. See "save"
>> function params p0 and v.
>> in your webrtc opensips use path module and function add_path_received
>>
>> On Tue, Jul 14, 2020 at 11:14 AM Mark Allen  wrote:
>> >
>> > I'm new to OpenSIPS and I've hit a problem I can't find a way past
>> >
>> > We have a test setup with an OpenSIPS mid-registrar in front of an 
>> > Asterisk PBX. Mid-registrar is currently in mode 1 (registration 
>> > throttling). We have SIP and WebRTC endpoints that we want to use.
>> >
>> > Current state is:
>> >
>> > REGISTER:  WebRTC webphone (Mizutech) -> OpenSIPS Mid-registrar -> 
>> > Asterisk  = success
>> > REGISTER:  SIP softphone (LinPhone)   -> OpenSIPS Mid-registrar -> 
>> > Asterisk  = success
>> >
>> > INVITE:SIP softphone-> OpenSIPS -> Asterisk -> OpenSIPS -> SIP 
>> > softphone   = success, call connects with audio both ways
>> > INVITE:WebRTC webphone  -> OpenSIPS -> Asterisk -> OpenSIPS -> SIP 
>> > softphone   = success, call connects with audio both ways
>> > INVITE:SIP softphone-> OpenSIPS -> Asterisk -> OpenSIPS -> WebRTC 
>> > webphone = fails with "476 Unresolvable destination"
>> >
>> > syslog messages:
>> > ERROR:core:sip_resolvehost: forced proto 6 not matching sips uri
>> > CRITICAL:core:mk_proxy: could not resolve hostname: "4xp44jxl0qq0.invalid"
>> > ERROR:tm:uri2proxy: bad host name in URI 
>> > 
>> > ERROR:tm:t_forward_nonack: failure to add branches
>> >
>> >
>> > Following past reports that I've found with a similar error, 
>> > fix_nated_contact() is run on INVITE messages just before rtpengine flags 
>> > are set and the t_relay() command, but it doesn't appear to make any 
>> > difference. If I change the t_relay() to t_relay(0x04,) to disable DNS 
>> > Failover, I still see the same errors in the log file. I've also checked 
>> > the record in the OpenSIPS DB "location" table and it seems to me that it 
>> > has the correct contact_id and contact info for the destination...
>> >
>> > contact_id: 2004383309156582802
>> > contact:
>> > sips:11001@4xp44jxl0qq0.invalid;rtcweb-breaker=yes;transport=wss
>> >
>> > I'm stuck on where I can go from here  - any help very much appreciated!
>> >
>> > thx
>> >
>> > Mark
>> >
>> >
>> > Setup:
>> > OpenSIPS 3.0.2 on Debian Buster
>> > RTPEngine Version: 8.4.0.0+0~mr8.4.0.0
>> >
>> > INVITE:
>> > 2020/07/14 14:22:06.176544 192.168.50.185:5060 -> 192.168.50.69:5060
>> > INVITE sip:11001@192.168.50.69:5060;ctid=2004383309156582802 SIP/2.0
>> > Via: SIP/2.0/UDP 
>> > 192.168.50.185:5060;rport;branch=z9hG4bKPj3e87a449-f4cc-4128-abbe-95706a1a44a0
>> > From: "11002" 
>> > ;tag=1c03916d-d086-479a-b984-ff5bbbf3aba8
>> > To: 
>> > Contact: 
>> > Call-ID: d1524788-cac2-4bea-a905-4e17ba006688
>> > CSeq: 24456 INVITE
>> > Allow: OPTIONS, REGISTER, SUBSCRIBE, NOTIFY, PUBLISH, INVITE, ACK, BYE, 
>> > CANCEL, UPDATE, PRACK, MESSAGE, REFER
>> > Supported: 100rel, timer, replaces, norefersub
>> > Session-Expires: 1800
>> > Min-SE: 90
>> > P-Asserted-Identity: "11002" 
>> > Max-Forwards: 70
>> > User-Agent: FPBX-15.0.16.63(16.9.0)
>> > Content-Type: applic

Re: [OpenSIPS-Users] SIP to WebRTC via OpenSIPS mid-registrar fails: forced proto 6 not matching sips uri

2020-07-14 Thread Stas Kobzar
Hello Mark,

I had a similar challenge. Using "path" module on both opensips helps
to overcome this problem.
https://opensips.org/docs/modules/3.2.x/path.html

In your mid-registrar you need to enable path support. See the "save"
function parameters p0 and v.
In your WebRTC OpenSIPS, use the path module and the add_path_received function.

On Tue, Jul 14, 2020 at 11:14 AM Mark Allen  wrote:
>
> I'm new to OpenSIPS and I've hit a problem I can't find a way past
>
> We have a test setup with an OpenSIPS mid-registrar in front of an Asterisk 
> PBX. Mid-registrar is currently in mode 1 (registration throttling). We have 
> SIP and WebRTC endpoints that we want to use.
>
> Current state is:
>
> REGISTER:  WebRTC webphone (Mizutech) -> OpenSIPS Mid-registrar -> Asterisk   
>= success
> REGISTER:  SIP softphone (LinPhone)   -> OpenSIPS Mid-registrar -> Asterisk   
>= success
>
> INVITE:SIP softphone-> OpenSIPS -> Asterisk -> OpenSIPS -> SIP 
> softphone   = success, call connects with audio both ways
> INVITE:WebRTC webphone  -> OpenSIPS -> Asterisk -> OpenSIPS -> SIP 
> softphone   = success, call connects with audio both ways
> INVITE:SIP softphone-> OpenSIPS -> Asterisk -> OpenSIPS -> WebRTC 
> webphone = fails with "476 Unresolvable destination"
>
> syslog messages:
> ERROR:core:sip_resolvehost: forced proto 6 not matching sips uri
> CRITICAL:core:mk_proxy: could not resolve hostname: "4xp44jxl0qq0.invalid"
> ERROR:tm:uri2proxy: bad host name in URI 
> 
> ERROR:tm:t_forward_nonack: failure to add branches
>
>
> Following past reports that I've found with a similar error, 
> fix_nated_contact() is run on INVITE messages just before rtpengine flags are 
> set and the t_relay() command, but it doesn't appear to make any difference. 
> If I change the t_relay() to t_relay(0x04,) to disable DNS Failover, I still 
> see the same errors in the log file. I've also checked the record in the 
> OpenSIPS DB "location" table and it seems to me that it has the correct 
> contact_id and contact info for the destination...
>
> contact_id: 2004383309156582802
> contact:sips:11001@4xp44jxl0qq0.invalid;rtcweb-breaker=yes;transport=wss
>
> I'm stuck on where I can go from here  - any help very much appreciated!
>
> thx
>
> Mark
>
>
> Setup:
> OpenSIPS 3.0.2 on Debian Buster
> RTPEngine Version: 8.4.0.0+0~mr8.4.0.0
>
> INVITE:
> 2020/07/14 14:22:06.176544 192.168.50.185:5060 -> 192.168.50.69:5060
> INVITE sip:11001@192.168.50.69:5060;ctid=2004383309156582802 SIP/2.0
> Via: SIP/2.0/UDP 
> 192.168.50.185:5060;rport;branch=z9hG4bKPj3e87a449-f4cc-4128-abbe-95706a1a44a0
> From: "11002" 
> ;tag=1c03916d-d086-479a-b984-ff5bbbf3aba8
> To: 
> Contact: 
> Call-ID: d1524788-cac2-4bea-a905-4e17ba006688
> CSeq: 24456 INVITE
> Allow: OPTIONS, REGISTER, SUBSCRIBE, NOTIFY, PUBLISH, INVITE, ACK, BYE, 
> CANCEL, UPDATE, PRACK, MESSAGE, REFER
> Supported: 100rel, timer, replaces, norefersub
> Session-Expires: 1800
> Min-SE: 90
> P-Asserted-Identity: "11002" 
> Max-Forwards: 70
> User-Agent: FPBX-15.0.16.63(16.9.0)
> Content-Type: application/sdp
> Content-Length:   411
>
> v=0
> o=- 263255642 263255642 IN IP4 192.168.50.185
> s=Asterisk
> c=IN IP4 192.168.50.185
> t=0 0
> m=audio 10292 RTP/AVPF 9 107 8 0 3 111 101
> a=rtpmap:9 G722/8000
> a=rtpmap:107 opus/48000/2
> a=fmtp:107 useinbandfec=1
> a=rtpmap:8 PCMA/8000
> a=rtpmap:0 PCMU/8000
> a=rtpmap:3 GSM/8000
> a=rtpmap:111 G726-32/8000
> a=rtpmap:101 telephone-event/8000
> a=fmtp:101 0-16
> a=ptime:20
> a=maxptime:20
> a=sendrecv
> a=rtcp-mux
>
>
> ___
> Users mailing list
> Users@lists.opensips.org
> http://lists.opensips.org/cgi-bin/mailman/listinfo/users

___
Users mailing list
Users@lists.opensips.org
http://lists.opensips.org/cgi-bin/mailman/listinfo/users


Booting optee

2020-04-05 Thread Stas U
Hey guys,

I'm facing a problem trying to boot first optee and then the linux
kernel on a custom board based around an i.MX6q SoM. Besides some
information about overlapped physical memory optee boots fine and
hands over to the normal world where the kernel is being decompressed.
According to the comments in the optee source, memory regions of the
same type can overlap and will be merged. A log of the boot
process can be found at: https://pastebin.com/6PtttRPW

Afterwards, the kernel would boot only about 1 time in 10. I suspect
that optee configures the memory in some odd way, so that after
relocation the kernel code cannot be executed; since the relocation
address varies from boot to boot, it occasionally works. As suggested
in the IRC channel, I should use early boot for optee.

Sadly, I can't figure out where and how to tell the barebox pbl to
first boot optee. I found the early boot option in the config of
barebox. As far as I understand, the PBL will first boot optee instead
of barebox. As soon as optee hands over to the normal world, barebox
will execute and start the kernel.

The documentation (https://www.barebox.org/doc/latest/user/optee.html)
tells me, my board needs to call start_optee_early() with a valid tee
and fdt. I don't quite get where the transition between PBL and
barebox happens, and thus where this call should happen. Also, I can't wrap
my head around where to put optee and the FDT. Right now they are
located on the emmc, obviously I can't access the fs at this stage, so
I'd need to link them to the barebox binary and pass the relative
addresses?

Could someone please give me some hints on where to look next, since I'm
completely out of ideas.


Thank you
BS

___
barebox mailing list
barebox@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/barebox


Re: [gnome-cyr] Вопрос по работе с системой DL

2020-04-03 Thread Stas Solovey
[Translated from Russian] It is synchronized automatically; translate the current release, i.e. 3.36.

At 7:04, 3 April 2020, "Ivan Molodetskikh via gnome-cyr" wrote:
Hello!
Which branch should be translated, master or gnome-3-36? Are they somehow synchronized automatically?
Thanks,
Ivan
___
gnome-cyr mailing list
gnome-cyr@gnome.org
https://mail.gnome.org/mailman/listinfo/gnome-cyr
___
gnome-cyr mailing list
gnome-cyr@gnome.org
https://mail.gnome.org/mailman/listinfo/gnome-cyr


[valgrind] [Bug 418756] New: MAP_FIXED_NOREPLACE mmap flag unsupported

2020-03-11 Thread Stas Sergeev
https://bugs.kde.org/show_bug.cgi?id=418756

Bug ID: 418756
   Summary: MAP_FIXED_NOREPLACE mmap flag unsupported
   Product: valgrind
   Version: 3.15 SVN
  Platform: Other
OS: Linux
Status: REPORTED
  Severity: normal
  Priority: NOR
 Component: general
  Assignee: jsew...@acm.org
  Reporter: st...@yandex.ru
  Target Milestone: ---

SUMMARY
valgrind doesn't support MAP_FIXED_NOREPLACE mmap flag,
causing the programs that use it, to misbehave.

-- 
You are receiving this mail because:
You are watching all bug changes.

[Ubuntu-translations-coordinators] [Bug 1860037] Re: Typo in the Russian interface of the package

2020-01-17 Thread Stas Solovey
I will fix it in upstream as soon as possible, don't worry
You can report directly to https://l10n.gnome.org/teams/ru/

-- 
You received this bug notification because you are a member of Ubuntu
Translations Coordinators, which is subscribed to Ubuntu Translations.
Matching subscriptions: Ubuntu Translations bug mail
https://bugs.launchpad.net/bugs/1860037

Title:
  Typo in the Russian interface of the package

Status in Ubuntu Translations:
  Fix Released

Bug description:
  The Russian interface of the gnome-control-center package contains a typo. A 
typo is highlighted in the attached screenshot. Instead of "входе с систему" 
you must specify "входе в систему".
  Ubuntu version is 18.04.3 LTS
  gnome-control-center version is 1:3.28.2-0ubuntu0.18.04.5

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu-translations/+bug/1860037/+subscriptions

___
Mailing list: https://launchpad.net/~ubuntu-translations-coordinators
Post to : ubuntu-translations-coordinators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~ubuntu-translations-coordinators
More help   : https://help.launchpad.net/ListHelp


[PHP-WEBMASTER] Bug #78911 [Opn->Wfx]: Please do not show e-mails from bug reporters in plain text in this site!

2019-12-05 Thread stas
Edit report at https://bugs.php.net/bug.php?id=78911=1

 ID: 78911
 Updated by: s...@php.net
 Reported by:oma2000 at hotmail dot com
 Summary:Please do not show e-mails from bug reporters in
 plain text in this site!
-Status: Open
+Status: Wont fix
 Type:   Bug
 Package:Website problem
 Operating System:   N/A
 PHP Version:Irrelevant
 Block user comment: N
 Private report: N

 New Comment:

If you have a problem with that, create a dedicated email address for PHP bug 
reporting (those can be had for free from about 9000 free email providers). 
Most of those also have pretty effective anti-spam filters.


Previous Comments:

[2019-12-04 12:27:31] oma2000 at hotmail dot com

Also, if I try to change my mail to prevent it from being on a public website, 
this e-mail is still being shown in the "History" section of the bug report, so 
I really can't remove it! Please, do not show e-mail addresses in the "History" 
section.


[2019-12-04 12:25:30] oma2000 at hotmail dot com

Description:

I just filed a bug and I just noticed my e-mail is publicly shown in plain 
text, just replacing "." with "dot" and "@" with "at".

Do you really think such a crude way of "obfuscating" an e-mail address is 
going to stop spammer bots from harvesting it?

The e-mail should not be visible at all to begin with!
But if you absolutely need to display the e-mail address, please use a more 
advanced way of mail address obfuscation.

Expected result:

E-mails should never be shown in a public website directly reachable from 
search engines.

Actual result:
--
Do not show e-mails in a public website.






--
Edit this bug report at https://bugs.php.net/bug.php?id=78911=1

-- 
PHP Webmaster List Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



[PHP-WEBMASTER] Sec Bug->Bug #78911 [Opn]: Please do not show e-mails from bug reporters in plain text in this site!

2019-12-05 Thread stas
Edit report at https://bugs.php.net/bug.php?id=78911=1

 ID: 78911
 Updated by: s...@php.net
 Reported by:oma2000 at hotmail dot com
 Summary:Please do not show e-mails from bug reporters in
 plain text in this site!
 Status: Open
-Type:   Security
+Type:   Bug
 Package:Website problem
 Operating System:   N/A
 PHP Version:Irrelevant
 Block user comment: N
 Private report: Y



Previous Comments:

[2019-12-04 12:27:31] oma2000 at hotmail dot com

Also, if I try to change my mail to prevent it from being on a public website, 
this e-mail is still being shown in the "History" section of the bug report, so 
I really can't remove it! Please, do not show e-mail addresses in the "History" 
section.


[2019-12-04 12:25:30] oma2000 at hotmail dot com

Description:

I just filed a bug and I just noticed my e-mail is publicly shown in plain 
text, just replacing "." with "dot" and "@" with "at".

Do you really think such a crude way of "obfuscating" an e-mail address is 
going to stop spammer bots from harvesting it?

The e-mail should not be visible at all to begin with!
But if you absolutely need to display the e-mail address, please use a more 
advanced way of mail address obfuscation.

Expected result:

E-mails should never be shown in a public website directly reachable from 
search engines.

Actual result:
--
Do not show e-mails in a public website.






--
Edit this bug report at https://bugs.php.net/bug.php?id=78911=1

-- 
PHP Webmaster List Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [Kicad-developers] GitLab migration

2019-12-03 Thread Seppe Stas
Just wanted to say I love this and I'm sure it will improve visibility of
the KiCad project and lower the barrier of entry for new developers.

Some questions:

   - Are there any plans to use the Gitlab CI system in the future?
   - Will the KiCad libraries also be migrated or is the plan to keep them
   on Github?

Just be sure to make backups once in a while; Gitlab.com has some
history of database issues.

Seppe

On Sat, Nov 30, 2019 at 8:59 PM Wayne Stambaugh 
wrote:

> This is intentional.  GitHub automatically appends the Launchpad URL,
> which is now a mirror of the GitLab repo.  I may change the mirror url to
> GitLab when I get a chance but that will only decrease the refresh latency.
>
> On 11/30/19 3:24 PM, Diego Herranz wrote:
> > Just a quick comment.
> >
> > The description of https://github.com/KiCad/kicad-source-mirror has been
> > updated to mention gitlab, but still has a link to launchpad. Is that
> > intentional?
> >
> > Thanks,
> > Diego
> >
> > On Fri, 29 Nov 2019, 18:25 Wayne Stambaugh,  > > wrote:
> >
> > We will have to figure this out as we go.  Whatever platform we
> use, it
> > will not be the free-for-all that we currently have.
> >
> > On 11/29/2019 1:19 PM, Jon Evans wrote:
> > > As far as I know, there is not fine-grained access control on Wiki
> > > pages.  The only way to do something like this to create a separate
> > > project just for a public wiki.  Then a limited set of people
> > would have
> > > permissions to copy things from the public wiki to the main KiCad
> > > project wiki.
> > >
> > > To be honest, that sounds less useful than the current status quo,
> > which
> > > is to use Google Docs for pre-implementation design collaboration.
> > > I know not everyone likes Google or wants to create an account,
> > and I'd
> > > be happy to try alternatives that have the same functionality.
> But, I
> > > think it's important to have similar functionality: A few people
> have
> > > access to edit, and more people (i.e. the public) can only make
> > > suggestions or comments.
> > >
> > > -Jon
> > >
> > > On Fri, Nov 29, 2019 at 1:09 PM Simon Richter
> > mailto:simon.rich...@hogyros.de>
> > >  > >> wrote:
> > >
> > > Hi Wayne,
> > >
> > > On Fri, Nov 29, 2019 at 12:49:30PM -0500, Wayne Stambaugh
> wrote:
> > >
> > > > I will also be disabling the Launchpad blueprint and answers
> > pages as
> > > > well.  We not going to migrate the blueprints to GitLab
> > because the
> > > > entire blueprint system is a mess due to the lack of sane
> > permissions.
> > > > We may have to manually migrate the useful blueprints to
> GitLab
> > > once we
> > > > have a reasonable process for doing so.
> > >
> > > One of my main hopes for the migration is to get a workflow for
> > > collaborative pre-implementation design. Blueprints were a nice
> > > idea, but
> > > they were never fully implemented on Launchpad, and it shows.
> > >
> > > It'd probably make sense to use the Wiki for that, do you know
> > if it is
> > > possible for non-committers to have limited Wiki editing
> > rights (e.g. an
> > > "ideas" namespace with looser permissions, and moving the page
> > to the
> > > "roadmap" namespace later)?
> > >
> > >Simon
> > >
> > > ___
> > > Mailing list: https://launchpad.net/~kicad-developers
> > > Post to : kicad-developers@lists.launchpad.net
> > 
> > >  > >
> > > Unsubscribe : https://launchpad.net/~kicad-developers
> > > More help   : https://help.launchpad.net/ListHelp
> > >
> > >
> > > ___
> > > Mailing list: https://launchpad.net/~kicad-developers
> > > Post to : kicad-developers@lists.launchpad.net
> > 
> > > Unsubscribe : https://launchpad.net/~kicad-developers
> > > More help   : https://help.launchpad.net/ListHelp
> > >
> >
> > ___
> > Mailing list: https://launchpad.net/~kicad-developers
> > Post to : kicad-developers@lists.launchpad.net
> > 
> > Unsubscribe : https://launchpad.net/~kicad-developers
> > More help   : https://help.launchpad.net/ListHelp
> >
>
> ___
> Mailing list: https://launchpad.net/~kicad-developers
> Post to : 

[jira] [Updated] (THRIFT-5025) Incomplete promise rejection

2019-11-21 Thread Stas Sribnyi (Jira)


 [ 
https://issues.apache.org/jira/browse/THRIFT-5025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stas Sribnyi updated THRIFT-5025:
-
Description: 
I have a problem while performing client requests created with HttpConnection.

In the case of errors like 'ECONNREFUSED', 'Connection timed out' and some 
others, http_connection *emits an error, but nothing handles it*; 
therefore the promise does not reject and in fact just hangs, so we have memory 
leaks.

It looks like the promise can only be rejected by a call to clientCallback
 
[https://github.com/apache/thrift/blob/41f47aff7ccc1a093eb5e48250377c1178babeec/lib/nodejs/lib/thrift/http_connection.js#L140]

Since I did not find any example of such cases (multiplexer + http 
connection + node), it is possible that I am using it in the wrong way, but from 
the source code perspective it looks like a design flaw; could you please take a 
look into this issue?

Here is a simplified source code I use:
{code:java}
const multiplexer = new Multiplexer();
const connection = new HttpConnection({
  path: '/rpc',
  transport: TBufferedTransport,
  protocol: TBinaryProtocol,
  nodeOptions: {
host: '127.0.0.1',
port: 8989,
  },
});

const client = multiplexer.createClient(
  'UserClient',
  UserClient,
  connection
);

try {
  const user = await client.registerUser({
// Some props related to user
  } as any);

  // this only will be executed if registerUser performs without any errors.
  // perform some actions with response
} catch (error) { // this never happens in case of connection issues and some 
internal errors of http connection
  // log error 
}
{code}
 

  was:
I have a problem while performing client requests created with Multiplexer and 
HttpConnection.

In case of some errors like 'ECONNREFUSED', 'Connection timed out' and some 
other errors http_connection *emits an error, but nothing handles it*, 
therefore promise does not reject and in fact just hangs, so we have memory 
leaks.

Looks like promise can only be rejected only with a call of clientCallback
 
[https://github.com/apache/thrift/blob/41f47aff7ccc1a093eb5e48250377c1178babeec/lib/nodejs/lib/thrift/http_connection.js#L140]

As far as I did not find any example of such cases (multiplexer+http 
connection+node), it is possible that I use it in the wrong way, but from the 
source code perspective it looks like a design flaw, could you please take a 
look into this issue?

Here is a simplified source code I use:
{code:java}
const multiplexer = new Multiplexer();
const connection = new HttpConnection({
  path: '/rpc',
  transport: TBufferedTransport,
  protocol: TBinaryProtocol,
  nodeOptions: {
host: '127.0.0.1',
port: 8989,
  },
});

const client = multiplexer.createClient(
  'UserClient',
  UserClient,
  connection
);

try {
  const user = await client.registerUser({
// Some props related to user
  } as any);

  // this only will be executed if registerUser performs without any errors.
  // perform some actions with response
} catch (error) { // this never happens in case of connection issues and some 
internal errors of http connection
  // log error 
}
{code}
 


> Incomplete promise rejection
> 
>
> Key: THRIFT-5025
> URL: https://issues.apache.org/jira/browse/THRIFT-5025
> Project: Thrift
>  Issue Type: Bug
>  Components: Node.js - Library
>Affects Versions: 0.12.0
>Reporter: Stas Sribnyi
>Priority: Major
>  Labels: critical
>
> I have a problem while performing client requests created with HttpConnection.
> In case of some errors like 'ECONNREFUSED', 'Connection timed out' and some 
> other errors http_connection *emits an error, but nothing handles it*, 
> therefore promise does not reject and in fact just hangs, so we have memory 
> leaks.
> Looks like promise can only be rejected only with a call of clientCallback
>  
> [https://github.com/apache/thrift/blob/41f47aff7ccc1a093eb5e48250377c1178babeec/lib/nodejs/lib/thrift/http_connection.js#L140]
> As far as I did not find any example of such cases (multiplexer+http 
> connection+node), it is possible that I use it in the wrong way, but from the 
> source code perspective it looks like a design flaw, could you please take a 
> look into this issue?
> Here is a simplified source code I use:
> {code:java}
> const multiplexer = new Multiplexer();
> const connection = new HttpConnection({
>   path: '/rpc',
>   transport: TBufferedTransport,
>   protocol: TBinaryProtocol,
>   nodeOptions: {
> host: '127.0.0.1',
> port: 8989,
>   },
&

[jira] [Updated] (THRIFT-5025) Incomplete promise rejection

2019-11-21 Thread Stas Sribnyi (Jira)


 [ 
https://issues.apache.org/jira/browse/THRIFT-5025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stas Sribnyi updated THRIFT-5025:
-
Description: 
I have a problem while performing client requests created with Multiplexer and 
HttpConnection.

In the case of errors like 'ECONNREFUSED', 'Connection timed out' and some 
others, http_connection *emits an error, but nothing handles it*; 
therefore the promise does not reject and in fact just hangs, so we have memory 
leaks.

It looks like the promise can only be rejected by a call to clientCallback
 
[https://github.com/apache/thrift/blob/41f47aff7ccc1a093eb5e48250377c1178babeec/lib/nodejs/lib/thrift/http_connection.js#L140]

Since I did not find any example of such cases (multiplexer + http 
connection + node), it is possible that I am using it in the wrong way, but from 
the source code perspective it looks like a design flaw; could you please take a 
look into this issue?

Here is a simplified source code I use:
{code:java}
const multiplexer = new Multiplexer();
const connection = new HttpConnection({
  path: '/rpc',
  transport: TBufferedTransport,
  protocol: TBinaryProtocol,
  nodeOptions: {
host: '127.0.0.1',
port: 8989,
  },
});

const client = multiplexer.createClient(
  'UserClient',
  UserClient,
  connection
);

try {
  const user = await client.registerUser({
// Some props related to user
  } as any);

  // this only will be executed if registerUser performs without any errors.
  // perform some actions with response
} catch (error) { // this never happens in case of connection issues and some 
internal errors of http connection
  // log error 
}
{code}
 

  was:
I have a problem while performing client requests created with Multiplexer and 
HttpConnection.

In case of some errors like 'ECONNREFUSED', 'Connection timed out' and some 
other errors http_connection *emits an error, but nothing handles it*, 
therefore promise does not reject and in fact just hangs, so we have memory 
leaks.

Looks like promise can only be rejected only with a call of clientCallback
[https://github.com/apache/thrift/blob/41f47aff7ccc1a093eb5e48250377c1178babeec/lib/nodejs/lib/thrift/http_connection.js#L140]

As far as I did not find any example of such cases (multiplexer+http 
connection+node), it is possible that I use it in the wrong way, but from the 
source code perspective it looks like a design flaw, could you please take a 
look into this issue?

Here is a simplified source code I use:
{code:java}
const multiplexer = new Multiplexer();
const connection = new HttpConnection({
  path: '/rpc',
  transport: TBufferedTransport,
  protocol: TBinaryProtocol,
  nodeOptions: {
host: '127.0.0.1',
port: 8989,
  },
});

const client = multiplexer.createClient(
  'UserClient',
  UserClient,
  connection
);

try {
  const user = await client.registerUser({
// Some props related to user
  } as any);

  // perform some actions with response
} catch (error) { // this never happens in case of connection issues and some 
internal errors of http connection
  // log error 
}
{code}
 


> Incomplete promise rejection
> 
>
> Key: THRIFT-5025
> URL: https://issues.apache.org/jira/browse/THRIFT-5025
> Project: Thrift
>  Issue Type: Bug
>  Components: Node.js - Library
>Affects Versions: 0.12.0
>Reporter: Stas Sribnyi
>Priority: Major
>  Labels: critical
>
> I have a problem while performing client requests created with Multiplexer 
> and HttpConnection.
> In case of some errors like 'ECONNREFUSED', 'Connection timed out' and some 
> other errors http_connection *emits an error, but nothing handles it*, 
> therefore promise does not reject and in fact just hangs, so we have memory 
> leaks.
> Looks like promise can only be rejected only with a call of clientCallback
>  
> [https://github.com/apache/thrift/blob/41f47aff7ccc1a093eb5e48250377c1178babeec/lib/nodejs/lib/thrift/http_connection.js#L140]
> As far as I did not find any example of such cases (multiplexer+http 
> connection+node), it is possible that I use it in the wrong way, but from the 
> source code perspective it looks like a design flaw, could you please take a 
> look into this issue?
> Here is a simplified source code I use:
> {code:java}
> const multiplexer = new Multiplexer();
> const connection = new HttpConnection({
>   path: '/rpc',
>   transport: TBufferedTransport,
>   protocol: TBinaryProtocol,
>   nodeOptions: {
> host: '127.0.0.1',
> port: 8989,
>   },
> });
> const client = multiplex

[jira] [Created] (THRIFT-5025) Incomplete promise rejection

2019-11-21 Thread Stas Sribnyi (Jira)
Stas Sribnyi created THRIFT-5025:


 Summary: Incomplete promise rejection
 Key: THRIFT-5025
 URL: https://issues.apache.org/jira/browse/THRIFT-5025
 Project: Thrift
  Issue Type: Bug
  Components: Node.js - Library
Affects Versions: 0.12.0
Reporter: Stas Sribnyi


I have a problem while performing client requests created with Multiplexer and 
HttpConnection.

In the case of errors like 'ECONNREFUSED', 'Connection timed out' and some 
others, http_connection *emits an error, but nothing handles it*; 
therefore the promise does not reject and in fact just hangs, so we have memory 
leaks.

It looks like the promise can only be rejected by a call to clientCallback
[https://github.com/apache/thrift/blob/41f47aff7ccc1a093eb5e48250377c1178babeec/lib/nodejs/lib/thrift/http_connection.js#L140]

Since I did not find any example of such cases (multiplexer + http 
connection + node), it is possible that I am using it in the wrong way, but from 
the source code perspective it looks like a design flaw; could you please take a 
look into this issue?

Here is a simplified source code I use:
{code:java}
const multiplexer = new Multiplexer();
const connection = new HttpConnection({
  path: '/rpc',
  transport: TBufferedTransport,
  protocol: TBinaryProtocol,
  nodeOptions: {
host: '127.0.0.1',
port: 8989,
  },
});

const client = multiplexer.createClient(
  'UserClient',
  UserClient,
  connection
);

try {
  const user = await client.registerUser({
// Some props related to user
  } as any);

  // perform some actions with response
} catch (error) { // this never happens in case of connection issues and some 
internal errors of http connection
  // log error 
}
{code}
 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [WM-RU] Meeting 10.12.2019

2019-11-16 Thread Stas Kozlovsky via Wikimedia-RU
You can't hear anything at all at Grabli. The office is better.

Sat, 16 Nov 2019, 13:56 Vladimir Medeyko via Wikimedia-RU <
wikimedia-ru@lists.wikimedia.org>:

> Hello, colleagues!
>
> Exactly in mid-December the next two-year term of my mandate
> expires.
>
> We discussed this today at the meeting and tentatively scheduled a meeting for
> 10.12.2019 at 19:30 at Grabli on Pyatnitskaya. Please come,
> we absolutely need a quorum!
> ___
> Wikimedia-RU mailing list
> Wikimedia-RU@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikimedia-ru
>
___
Wikimedia-RU mailing list
Wikimedia-RU@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikimedia-ru


Map Reduce over cache items, where values are sequences

2019-09-27 Thread Stas Girkin
Hello everyone,

I would like to use MapReduce over cache items representing events that
happened in a process, to calculate certain statistics. Could you be so kind
as to help me do that with Apache Ignite?

I have tens of millions of processes that happened in the past. Each
process looks like a sequence of events [event1, event2, event3, ...
eventN], where the number of events per process can vary (50-100). Every
event has a certain set of attributes like timestamp, event type, and a set
of metrics. I put these data into a cache as process_id => [e1, e2, e3, e4,
...]. What I would like to get is a histogram of how often an event of a
certain type happens across all processes, or across processes that satisfy
a certain condition. What I managed to do is broadcast a callable that lands
on ignite nodes, accesses local cache items, counts what I want, and
returns it back to the caller in K chunks which I have to aggregate on the
client.

Ignite localIgnite = Ignition.localIgnite();
IgniteCache localCache = localIgnite.cache("processes");
MyHistogram hist = new MyHistogram();
for (Cache.Entry e : localCache.localEntries()) {
    hist.process(e.getValue());
}
return hist;

The problem with this approach is that it utilizes only a single core on the
ignite node, while I have 64. How could I do something similar in a more
efficient manner?

thank you in advance.


Re: [Wikidata] dcatap namespace in WDQS

2019-08-15 Thread Stas Malyshev
Hi!

> As part of our Wikidata Query Service setup, we maintain the namespace
> serving DCAT-AP (DCAT Application Profile) data[1]. (If you don't know
> what I'm talking about you can safely ignore the rest of the message).

Following up on this discussion and the feedback received, I have
decided to move dcatap namespace to separate endpoint -
https://dcatap.wmflabs.org/. I've updated the manual to reflect it[1].
The old setup is still working, but we'll be disabling updates, and
eventually also disable the namespace itself. So while it can still be used
for now, if you plan to use it (logs suggest there's virtually no usage
now, but that can change of course), please use the endpoint above.

[1]
https://www.mediawiki.org/wiki/Wikidata_Query_Service/User_Manual#DCAT-AP
-- 
Stas Malyshev
smalys...@wikimedia.org

___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [gnome-cyr] GIMP documentation (gimp-help)

2019-08-06 Thread Stas Solovey

Greetings!

I assume the message may have gone to spam. I myself can't help with anything; 
I don't manage the accounts.


As for the documentation, I'm all for it.


On 06.08.2019 21:17, Juliette Tux via gnome-cyr wrote:

Good afternoon!
I have a few .po translations that I took directly from GitLab, and I would 
like to continue through the l10n interface and start by uploading what is 
already done, but I have now been waiting for the password-recovery e-mail 
for two hours. Can anyone help? Or should I create a new account instead? 
The site did recognize my e-mail address and told me to wait for a reply. 
Also, if my recovery succeeds, I would like to claim ALL the documentation 
files: they need serious weeding and finishing. I have time for this, for 
now. Thanks in advance!


--
Best regards, Yulia Dronova

___
gnome-cyr mailing list
gnome-cyr@gnome.org
https://mail.gnome.org/mailman/listinfo/gnome-cyr
___
gnome-cyr mailing list
gnome-cyr@gnome.org
https://mail.gnome.org/mailman/listinfo/gnome-cyr


[Wikidata] dcatap namespace in WDQS

2019-07-28 Thread Stas Malyshev
Hi!

As part of our Wikidata Query Service setup, we maintain the namespace
serving DCAT-AP (DCAT Application Profile) data[1]. (If you don't know
what I'm talking about you can safely ignore the rest of the message).

A recent check showed that this namespace is virtually unused - over the
last two months, only 3 queries per month were served from that namespace,
all of them coming from WMF servers (not sure whether it's a tool or
somebody querying manually, did not dig further).

So I wonder if it makes sense to continue maintaining this namespace?
While it does not require very significant effort - it's mostly
automated - it does need occasional attention when maintenance is
performed, and some scripts and configurations become slightly more
complex because of it. No big deal if somebody is using it, that's what
the service is for, but if it is completely unused, there is no point in spending
even minimal effort on it, at least on main production servers (of
course, it'd be possible to set up a simple SPARQL server in labs with
the same data).

In any case, RDF dcatap data will be available in
https://dumps.wikimedia.org/wikidatawiki/entities/dcatap.rdf, no change
is planned there, but if the namespace is phased out, the data could no
longer be queried using WDQS. One could still download it and, since
it's a very small dataset, use any tool that can read RDF to parse it
and work with it.

I'd like to hear from anybody interested in this whether they are using
this namespace or plan to use it and what for. Please either answer here
or even better in the task[2] on Phabricator.

[1]
https://www.mediawiki.org/wiki/Wikidata_Query_Service/User_Manual#DCAT-AP
[2] https://phabricator.wikimedia.org/T228297
-- 
Stas Malyshev
smalys...@wikimedia.org

___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Wikidata Query Service User-Agent requirements for script users

2019-07-23 Thread Stas Malyshev
Hi!

> Forgive my ignorance. I don't know much about infrastructure of WDQS and
> how it works. I just want to mention how application servers do it. In
> appservers, there are dedicated nodes both for apache and the replica
> database. So if a bot overdo things in Wikipedia (which happens quite a
> lot), users won't feel anything but the other bots take the hit. Routing
> based on UA seems hard though while it's easy in mediawiki (if you hit
> api.php, we assume it's a bot).

We have two clusters - public and internal, with the latter serving only
Wikimedia tasks thus isolated from outside traffic. However, we do not
have a practical way right now to separate bot and non-bot traffic, and
I don't think we now have resources for another cluster.

> Routing based on UA seems hard though while it's easy in mediawiki

I don't think our current LB setup can route based on user agent. There
could be a gateway that does that, but given that we don't have
resources for another cluster for now, it's not too useful to spend time
on developing something like that for now.

Even if we did separate browser and bot traffic, we'd still have the
problem on bot cluster - most bots are benign and low-traffic, and we
want to do our best to enable them to function smoothly. But for this to
work, we need ways to weed out outliners that consume too much
resources. In a way, the bucketing policy is a sort of version of what
you described - if you use proper identification, you are judged on your
traffic. If you use generic identification, you are bucketed with other
generic agents, and thus may be denied if that bucket is full. This is
not the best final solution, but experience so far shows it reduced the
incidence of problems. Further ideas on how to improve it of course are
welcome.

-- 
Stas Malyshev
smalys...@wikimedia.org

___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


[Wikidata] Wikidata Query Service User-Agent requirements for script users

2019-07-23 Thread Stas Malyshev
Hello all!

Here is (at last!) an update on what we are doing to protect the
stability of Wikidata Query Service.

For 4 years we have been offering to Wikidata users the Query Service, a
powerful tool that allows anyone to query the content of Wikidata,
without any identification needed. This means that anyone can use the
service using a script and make heavy or very frequent requests.
However, this freedom has led to the service being overloaded by a too
big amount of queries, causing the issues or lag that you may have noticed.

A reminder about the context:

We have had a number of incidents where the public WDQS endpoint was
overloaded by bot traffic. We don't think that any of that activity was
intentionally malicious, but rather that the bot authors most probably
don't understand the cost of their queries and the impact they have on
our infrastructure. We've recently seen more distributed bots, coming
from multiple IPs from cloud providers. This kind of pattern makes it
harder and harder to filter or throttle an individual bot. The impact
has ranged from increased update lag to full service interruption.

What we have been doing:

While we would love to allow anyone to run any query they want at any
time, we're not able to sustain that load, and we need to be more
aggressive in how we throttle clients. We want to be fair to our users
and allow everyone to use the service productively. We also want the
service to be available to the casual user and provide up-to-date access
to the live Wikidata data. And while we would love to throttle only
abusive bots, to be able to do that we need to be able to identify them.

We have two main means of identifying bots:

1) their user agent and IP address
2) the pattern of their queries

Identifying patterns in queries is done manually, by a person inspecting
the logs. It takes time and can only be done after the fact. We can only
start our identification process once the service is already overloaded.
This is not going to scale.

IP addresses are starting to be problematic. We see bots running on
cloud providers and running their workloads on multiple instances, with
multiple IP addresses.

We are left with user agents. But here, we have a problem again. To
block only abusive bots, we would need those bots to use a clearly
identifiable user agent, so that we can throttle or block them and
contact the author to work together on a solution. It is unlikely that
an intentionally abusive bot will voluntarily provide a way to be
blocked. So we need to be more aggressive about bots which are using a
generic user agent. We are not blocking those, but we are limiting the
number of requests coming from generic user agents. This is a large
bucket, with a lot of bots that are in this same category of "generic
user agent". Sadly, this is also the bucket that contains many small
bots that generate only a very reasonable load. And so we are also
impacting the bots that play fair.

At the moment, if your bot is affected by our restrictions, configure a
custom user agent that identifies you; this should be sufficient to give
you enough bandwidth. If you are still running into issues, please
contact us; we'll find a solution together.

What's coming next:

First, it is unlikely that we will be able to remove the current
restrictions in the short term. We're sorry for that, but the
alternative - service being unresponsive or severely lagged for everyone
- is worse.

We are exploring a number of alternatives. Adding authentication to the
service, and allowing higher quotas to bots that authenticate. Creating
an asynchronous queue, which could allow running more expensive queries,
but with longer deadlines. And we are in the process of hiring another
engineer to work on these ideas.

Thanks for your patience!

WDQS Team

___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: Add client connection check during the execution of the query

2019-07-05 Thread Stas Kelvich



> On 5 Jul 2019, at 11:46, Thomas Munro  wrote:
> 
> On Fri, Jul 5, 2019 at 6:28 PM Tatsuo Ishii  wrote:
>>> The purpose of this patch is to stop the execution of continuous
>>> requests in case of a disconnection from the client.
>> 
>> Pgpool-II already does this by sending a parameter status message to
>> the client. It is expected that clients are always prepared to receive
>> the parameter status message. This way I believe we could reliably
>> detect that the connection to the client is broken or not.
> 
> Hmm.  If you send a message, it's basically application-level
> keepalive.  But it's a lot harder to be sure that the protocol and
> socket are in the right state to insert a message at every possible
> CHECK_FOR_INTERRUPT() location.  Sergey's proposal of recv(MSG_PEEK)
> doesn't require any knowledge of the protocol at all, though it
> probably does need TCP keepalive to be configured to be useful for
> remote connections.


Well, indeed, in the case of a cable disconnect the only way to detect it with
the proposed approach is to have TCP keepalive. However, if the disconnection
happens due to client application shutdown, then the client OS should itself
properly close that connection, and therefore this patch will detect
such a situation without keepalives configured.

--
Stas Kelvich
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company






Re: [Xmldatadumps-l] Wikidate Entitites 24/06/2019 dump missing

2019-07-01 Thread Stas Malyshev
Hi!

> We have been using the Wikidata Entities data dump for quite a while,
> but the last two weeks we have been having an issue where the data dump
> archive has disappeared from the website, or it has not been there at all.
> 
> I mean here: https://dumps.wikimedia.org/other/wikidata/
> 
> 20190624.json.gz returns a File Not Found.

The dump for that week was not produced due to an error. Please wait for
the next week's dump which should happen quite soon as I understand.

-- 
Stas Malyshev
smalys...@wikimedia.org

___
Xmldatadumps-l mailing list
Xmldatadumps-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/xmldatadumps-l


Re: [Wikidata] Significant change of Wikidata dump size

2019-06-26 Thread Stas Malyshev
Hi!

On 6/25/19 11:17 PM, Ariel Glenn WMF wrote:
> I think the issue is with the 0624 json dumps, which do seem a lot
> smaller than previous weeks' runs.

Ah, true, I didn't realize that. I think this may be because of that
dumpJson.php issue, which is now fixed. Maybe rerun the dump?

-- 
Stas Malyshev
smalys...@wikimedia.org

___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Significant change of Wikidata dump size

2019-06-25 Thread Stas Malyshev
Hi!

> Which script, please, and which dump? (The conversation was not
> forwarded so I don't have the context.)

Sorry, the original complaint was:

> I apologize if I missed something, but why the current JSON dump size
is ~25GB while a week ago it was ~58GB? (see
https://dumps.wikimedia.org/wikidatawiki/entities/20190617/)

But looking at it now, I see wikidata-20190617-all.json.gz  is
comparable with the last week, so looks like it's fine now?

-- 
Stas Malyshev
smalys...@wikimedia.org

___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Significant change of Wikidata dump size

2019-06-25 Thread Stas Malyshev
Hi!

> Follow-up: according to my processing script, this dump contains
> only 30280591 entries, while the main page is still advertising 57M+
> data items.
> Isn't it a bug in the dump process?

There was a problem with dump script (since fixed), so the dump may
indeed be broken. CCing Ariel to take a look. Probably needs to be
re-run or we can just wait for the next one.

-- 
Stas Malyshev
smalys...@wikimedia.org

___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Result format change for WDQS JSON query output

2019-06-21 Thread Stas Malyshev
Hi!

> from 2014, so I will research which form is more correct. But for now I
> would recommend to update the tools to recognize that these literals now
> may have type. If I discover that the standards or accepted practices
> recommend otherwise, I'll update further. You can also watch
> https://phabricator.wikimedia.org/T225996 for final resolution of this.

I surveyed existing practices of SPARQL endpoints and tools, and looks
like the accepted practice is to omit the datatypes for such literals
even within the context of RDF 1.1. Example:
https://issues.apache.org/jira/browse/JENA-1077
I will adjust the code in Blazegraph accordingly, so WDQS will comply
with this practice (i.e. result format will be as it was before). This
will be implemented in coming days.
Sorry again for the disruption.
-- 
Stas Malyshev
smalys...@wikimedia.org

___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


[Wikidata] Result format change for WDQS JSON query output

2019-06-20 Thread Stas Malyshev
Hi!

Due to an upgrade to a more current version of the Sesame toolkit, the format of
JSON output of Wikidata Query Service has changed slightly[1]. The
change is that plain literals (ones that do not have explicit data type,
like "string" or "string"@de) now have "datatype" field. The language
literals will have type
http://www.w3.org/1999/02/22-rdf-syntax-ns#langString and the
non-language ones http://www.w3.org/2001/XMLSchema#string. This is in
accordance with RDF 1.1 standard [2], where all literals have data type
(even though for these types it is implicit).

I apologize for not noting this in advance - though I knew this change
in the standard had happened, I did not foresee that it would also carry
over to the JSON output format. I am not sure yet which output form is
actually correct, since the standards seem to conflict, maybe due to the
fact that the JSON results standard hasn't been updated since 2013 while
RDF 1.1 is from 2014, so I will research which form is more correct. But
for now I would recommend updating your tools to recognize that these
literals may now have a type. If I discover that the standards or
accepted practices recommend otherwise, I'll update further. You can
also watch
https://phabricator.wikimedia.org/T225996 for final resolution of this.
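For tools consuming the JSON results, here is a hedged sketch (in
JavaScript; `isPlainString` and `XSD_STRING` are illustrative names of my
own, not part of any API) of treating a plain literal the same whether or
not the explicit `xsd:string` datatype is present:

```javascript
// Treat a SPARQL JSON result binding as a plain string literal whether
// or not it carries the (RDF 1.1-implicit) xsd:string datatype.
// These helper names are illustrative, not a real library API.
const XSD_STRING = "http://www.w3.org/2001/XMLSchema#string";

function isPlainString(binding) {
  return binding.type === "literal" &&
         !("xml:lang" in binding) &&
         (binding.datatype === undefined || binding.datatype === XSD_STRING);
}
```

Language-tagged literals arrive with an `xml:lang` key (and, after the
change, the `rdf:langString` datatype), so they are excluded above.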

[1] https://phabricator.wikimedia.org/T225996
[2] https://www.w3.org/TR/rdf11-concepts/#section-Graph-Literal
-- 
Stas Malyshev
smalys...@wikimedia.org



[Wikidata] Planned filename change for Wikidata RDF entity dumps

2019-06-20 Thread Stas Malyshev
Hi!

As outlined in https://phabricator.wikimedia.org/T226153, we are
planning to change the filename scheme for Wikidata RDF entity dumps by
removing the "-BETA" suffix from the filename. The Wikidata RDF ontology
is not beta anymore and the dumps have been working stably for a while
now, so it's time to drop the beta mark from the name. It may take a
week or two for the change to propagate and be applied to the dumps, but
if your tools depend on the exact naming, please prepare them for the
eventual change in the name.

Note that links like
https://dumps.wikimedia.org/wikidatawiki/entities/latest-all.nt.gz would
still be pointing to the right files, and if all you care about is
downloading the latest dump, using these links is always recommended.

We will send another message once the change has been implemented and
deployed.

Thanks,
-- 
Stas Malyshev
smalys...@wikimedia.org



[Xmldatadumps-l] Planned filename change for Wikidata RDF entity dumps

2019-06-20 Thread Stas Malyshev
Hi!

As outlined in https://phabricator.wikimedia.org/T226153, we are
planning to change the filename scheme for Wikidata RDF entity dumps by
removing the "-BETA" suffix from the filename. The Wikidata RDF ontology
is not beta anymore and the dumps have been working stably for a while
now, so it's time to drop the beta mark from the name. It may take a
week or two for the change to propagate and be applied to the dumps, but
if your tools depend on the exact naming, please prepare them for the
eventual change in the name.

Note that links like
https://dumps.wikimedia.org/wikidatawiki/entities/latest-all.nt.gz would
still be pointing to the right files, and if all you care about is
downloading the latest dump, using these links is always recommended.

We will send another message once the change has been implemented and
deployed.

Thanks,
-- 
Stas Malyshev
smalys...@wikimedia.org

___
Xmldatadumps-l mailing list
Xmldatadumps-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/xmldatadumps-l


Re: [Wikidata] Overload of query.wikidata.org (Guillaume Lederrey)

2019-06-18 Thread Stas Malyshev
Hi!

On 6/18/19 2:29 PM, Tim Finin wrote:
> I've been using wdtaxonomy
> <https://wdtaxonomy.readthedocs.io/en/latest/> happily for many months
> on my macbook. Starting yesterday, every call I make (e.g., "wdtaxonomy
> -c Q5") produces an immediate "SPARQL request failed" message.

Could you provide more details, which query is sent and what is the full
response (including HTTP code)?

> 
> Might these requests be blocked now because of the new WDQS policies?

One thing I can think of is that this tool does not send a proper
User-Agent header. According to
https://meta.wikimedia.org/wiki/User-Agent_policy, all clients should
identify themselves with a valid user agent. We've started enforcing
this recently, so maybe this tool has that issue. If not, please provide
the data above.
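As a hedged illustration - the tool name, version, and contact address
below are made-up placeholders, and the exact header format is only what
I understand the policy page to describe - a client request with an
identifying User-Agent might look like:

```javascript
// Identify your client per https://meta.wikimedia.org/wiki/User-Agent_policy:
// tool name/version plus a way to contact the operator.
// All concrete values below are placeholders - substitute your own.
const headers = {
  "User-Agent": "MyWdqsTool/1.0 (https://example.org/my-tool; mailto:maintainer@example.org)",
  "Accept": "application/sparql-results+json",
};

function wdqsUrl(query) {
  return "https://query.wikidata.org/sparql?query=" + encodeURIComponent(query);
}
// e.g. (Node 18+): await fetch(wdqsUrl("SELECT * WHERE { ?s ?p ?o } LIMIT 1"),
//                              { headers });
```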

-- 
Stas Malyshev
smalys...@wikimedia.org



Re: [Wikidata] Scaling Wikidata Query Service

2019-06-17 Thread Stas Malyshev
Hi!

> The documented limits about FDB states that it to support up to 100TB of
> data
> <https://apple.github.io/foundationdb/known-limitations.html#database-size>.
> That is 100x times more
> than what WDQS needs at the moment.

"Support" is such a multi-faceted word. It can mean "it works very well
with such an amount of data and is faster than the alternatives", or "it
is guaranteed not to break up to this number but breaks after it", or
"it would work, given massive amounts of memory, super-fast hardware and
a very specific set of queries, but you'd really have to make an effort
to get it working", and everything in between. The devil is always in
the details, which this seemingly simple word "supports" is rife with.

> I am offering my full-time services, it is up to you decide what will
> happen.

I wish you luck with the grant, though I personally think that if you
expect to have a production-ready service in 6 months that can replace
WDQS, that is a bit too optimistic. I might be completely wrong on this,
of course. If you just plan to load the Wikidata data set and evaluate
the queries to ensure they are fast and produce proper results on the
setup you propose, then it can be done. Good luck!
-- 
Stas Malyshev
smalys...@wikimedia.org



Re: [Wikidata] Overload of query.wikidata.org

2019-06-17 Thread Stas Malyshev
Hi!

> We are currently dealing with a bot overloading the Wikidata Query
> Service. This bot does not look actively malicious, but does create
> enough load to disrupt the service. As a stop gap measure, we had to
> deny access to all bots using python-request user agent.
> 
> As a reminder, any bot should use a user agent that allows to identify
> it [1]. If you have trouble accessing WDQS, please check that you are
> following those guidelines.

To add to this, we have had this trouble because two events that WDQS
currently does not deal well with have coincided:

1. An edit bot that was making 200+ edits per minute. This is too much;
over 60/m is almost always too much. It is also worth considering, if
your bot makes multiple changes (e.g. adds multiple statements), doing
them in one call instead of several, since WDQS currently performs an
update for each change separately, and this may be expensive. We're
looking into various improvements here, but that is the current state.
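A hedged sketch of what batching could look like - the `wbeditentity`
action and the `data`/`claims` parameter names reflect my understanding
of the Wikibase API and should be checked against the current API docs;
Q4115189 is the Wikidata sandbox item, and the statement values are
purely illustrative:

```javascript
// Build ONE wbeditentity call carrying several new statements, instead
// of one API call (and hence one WDQS update) per statement.
// Parameter names are assumptions based on the Wikibase API docs.
const newStatements = [
  // "instance of (P31): human (Q5)" - illustrative values only
  {
    mainsnak: {
      snaktype: "value",
      property: "P31",
      datavalue: {
        value: { "entity-type": "item", "numeric-id": 5 },
        type: "wikibase-entityid",
      },
    },
    type: "statement",
    rank: "normal",
  },
  // ...more statements here, all sent in the same edit...
];

const params = new URLSearchParams({
  action: "wbeditentity",
  id: "Q4115189", // Wikidata sandbox item
  data: JSON.stringify({ claims: newStatements }),
  format: "json",
});
// One authenticated POST of `params` to https://www.wikidata.org/w/api.php
// (with an edit token) then performs the whole batch in a single edit.
```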

2. Several bots have been flooding the service query endpoint with
requests. There has recently been a growth in bots that a) completely
ignore both the regular limits and throttling hints, b) do not have a
proper identifying user agent, and c) use distributed hosts, so our
throttling system has trouble dealing with them automatically. We intend
to crack down more and more on such clients, because they look a lot
like a DDoS and ruin the service experience for everyone.
I will write down more detailed rules probably a bit later, but for now
see:
https://www.mediawiki.org/wiki/Wikidata_Query_Service/Implementation#Usage_constraints
Additionally, having a distinct User-Agent if you're running a bot is a
good idea.

And for people who think it's a good idea to launch a
max-requests-I-can-stuff-into-the-pipe bot, put it on several Amazon
machines so that throttling has a hard time detecting it, and then, when
throttling does detect it, neglect to check for a week that all the bot
is doing is fetching 403s from the service and wasting everybody's time
- please think again. If you want to do something non-trivial querying
WDQS and the limits get in the way - please talk to us (and if you know
somebody who isn't reading this list but is considering writing a bot
interfacing with WDQS - please educate them and refer them to us for
help; we really prefer to help than to ban). Otherwise, we'd be forced
to put more limitations on it that would affect everyone.
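To make the expectation concrete, here is a hedged sketch of a client
that backs off instead of hammering the endpoint. That the server
signals throttling via HTTP 429 plus a Retry-After header is my
assumption about WDQS behavior - verify it against the usage-constraints
page; the injectable `fetchImpl` parameter exists purely for
testability:

```javascript
// Back off on HTTP 429 using the Retry-After hint instead of retrying
// immediately. Assumes the server sends 429 + Retry-After (in seconds)
// when throttling; defaults to a 1-second wait if the header is absent.
async function politeFetch(url, { headers, maxRetries = 3, fetchImpl = fetch } = {}) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await fetchImpl(url, { headers });
    if (res.status !== 429) return res;
    const hint = res.headers.get("Retry-After");
    const waitSeconds = hint === null ? 1 : Number(hint) || 0;
    await new Promise((resolve) => setTimeout(resolve, waitSeconds * 1000));
  }
  throw new Error("giving up after repeated 429 responses");
}
```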

-- 
Stas Malyshev
smalys...@wikimedia.org



Re: Re: What do you think about a C# 6 like nameof() expression for

2019-06-14 Thread Stas Berkov
guest271314, what is your point against the `nameof` feature?

If you don't like it - don't use it. Why prohibit this feature for
those who find it beneficial?

I see `nameof` as beneficial in the following cases

Case 1. Function guard.
```
function func1(options) {
...
   if (options.userName == undefined) {
   throw new ParamNullError(nameof options.userName); //
`ParamNullError` is a custom error, derived from `Error`, composes
error message like "Parameter cannot be null: userName".
 // `Object.keys({options.userName})[0]` will not work here
   }
}
```

Case 2. Accessing property extended info
Those ES functions that accept field name as string.
e.g.
```
const descriptor1 = Object.getOwnPropertyDescriptor(object1, 'property1');
```
vs
```
const descriptor1 = Object.getOwnPropertyDescriptor(object1, nameof
object1.property1);
 // `Object.keys({options1.property1})[0]` will not work here
```
The 2nd (proposed) variant is less likely to be broken during
refactoring (robustness).

It would make devs who use an IDE more productive and make their lives
easier. Why not give them this possibility and make them happy?
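For comparison, a sketch of the `Object.keys({x})[0]` workaround already
mentioned in this thread: it covers simple identifiers (and so survives
IDE renames), but it cannot express property accesses like
`options.userName`, which is part of the case for a language-level
`nameof`:

```javascript
// Emulate `nameof` for local bindings via a shorthand object literal.
// Works for nameof({ userName }); there is no equivalent shorthand for
// options.userName, hence the proposal.
const nameof = (obj) => Object.keys(obj)[0];

function func1(userName) {
  if (userName == undefined) {
    // The name survives an IDE rename of `userName`,
    // unlike a hardcoded "userName" string literal.
    throw new Error(`Parameter cannot be null: ${nameof({ userName })}`);
  }
  return userName;
}
```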
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Re: What do you think about a C# 6 like nameof() expression for

2019-06-14 Thread Stas Berkov
> Is Case 1 equivalent to a briefer version of
> ```
   if (userName == undefined) {
   throw new Error(`Argument cannot be null:
${Object.keys({userName})[0]}`);
   }
```
Less readable, but in this simple case it might work.
What if we do the following:
Case 1. Function guard.
```
function func1(options) {
...
   if (options.userName == undefined) {
   throw new ParamNullError(nameof options.userName); //
`ParamNullError` is a custom error, derived from `Error`, composes error
message like "Argument cannot be null: userName".
   }
}
```

Case 2. Accessing property extended info
e.g.
```
const descriptor1 = Object.getOwnPropertyDescriptor(object1, 'property1');
```
vs
```
const descriptor1 = Object.getOwnPropertyDescriptor(object1, nameof
object1.property1);
```
The 2nd (proposed) variant is less likely to be broken during
refactoring (robustness).

On Fri, Jun 14, 2019 at 9:48 PM Stas Berkov  wrote:

> Less fragile. Less mess. You can rename field/property without fear you
> break something (using IDE refactoring tools).
> With high probablity you will break something when you refactor and have
> fields hardcoded as strings.
> Someone can object that you can rename strings as well.
> Issue here that you can ocassionally change non-related strings that
> should not be changed even they match or have matching substring.
>
> On Fri, Jun 14, 2019 at 9:38 PM guest271314  wrote:
>
>> Is Case 1 equivalent to a briefer version of
>>
>> ```
>>if (userName == undefined) {
>>throw new Error(`Argument cannot be null:
>> ${Object.keys({userName})[0]}`);
>>}
>> ```
>>
>> ?
>>
>> If not, how is ```nameof``` different?
>>
>> What is the difference between the use of 
>> ```message.hasOwnProperty(property)```
>> and ```nameof msg.expiration_utc_time```?
>>
>> > You get more robust code.
>>
>> How is "robust" objectively determined?
>>
>>
>>
>>
>> On Fri, Jun 14, 2019 at 5:21 PM Stas Berkov 
>> wrote:
>>
>>> ES can befit from `nameof` feature the same way as TS. There is no TS
>>> specific in it.
>>> It was ask to introduce in TS as a workaround since TS is considered as
>>> extention of ES.
>>>
>>> Case 1. Function guard.
>>> ```
>>> function func1(param1, param2, param3, userName, param4, param5) {
>>>if (userName == undefined) {
>>>throw new ArgumentNullError(nameof userName); //
>>> `ArgumentNullError` is a custom error, derived from `Error`, composes error
>>> message like "Argument cannot be null: userName".
>>>}
>>> }
>>> ```
>>>
>>> Case 2. Access extended information an object property.
>>> Assume a function
>>> ```
>>> function protoPropertyIsSet(message, property) {
>>> return message != null && message.hasOwnProperty(property);
>>> }
>>> ```
>>> Then in code you use it as `if (protoPropertyIsSet(msg,
>>> "expiration_utc_time")) {... }`.
>>> Having `nameof` would allow you to do that `if (protoPropertyIsSet(msg,
>>> nameof msg.expiration_utc_time)) {... }`.
>>> You get more robust code.
>>>
>>> On Fri, Jun 14, 2019 at 5:46 PM Augusto Moura 
>>> wrote:
>>>
>>>> Can you list the benefits of having this operators? Maybe with example
>>>> use cases
>>>>
>>>> If I understand it correctly, the operator fits better in compiled
>>>> (and typed) languages, most of the use cases don't apply to dynamic
>>>> Javascript
>>>> The only legit use case I can think of is helping refactor tools to
>>>> rename properties (but even mismatch errors between strings and
>>>> properties names can be caught in compile time using modern
>>>> Typescript)
>>>>
>>>> Em sex, 14 de jun de 2019 às 10:05, Stas Berkov
>>>>  escreveu:
>>>> >
>>>> > Can we revisit this issue?
>>>> >
>>>> >
>>>> > In C# there is `nameof`, in Swift you can do the same by calling
>>>> >
>>>> > ```
>>>> >
>>>> > let keyPath = \Person.mother.firstName
>>>> >
>>>> > NSPredicate(format: "%K == %@", keyPath, "Andrew")
>>>> >
>>>> > ```
>>>> >
>>>> > Let's introduce `nameof` in ES, please.
>>>> >
>>>> >
>>>> > Devs from TypeScript

Re: Re: What do you think about a C# 6 like nameof() expression for

2019-06-14 Thread Stas Berkov
Less fragile. Less mess. You can rename a field/property without fear of
breaking something (using IDE refactoring tools).
With high probability you will break something when you refactor and
have field names hardcoded as strings.
Someone can object that you can rename strings as well.
The issue here is that you can occasionally change unrelated strings
that should not be changed, even if they match or contain a matching
substring.

On Fri, Jun 14, 2019 at 9:38 PM guest271314  wrote:

> Is Case 1 equivalent to a briefer version of
>
> ```
>if (userName == undefined) {
>throw new Error(`Argument cannot be null:
> ${Object.keys({userName})[0]}`);
>}
> ```
>
> ?
>
> If not, how is ```nameof``` different?
>
> What is the difference between the use of 
> ```message.hasOwnProperty(property)```
> and ```nameof msg.expiration_utc_time```?
>
> > You get more robust code.
>
> How is "robust" objectively determined?
>
>
>
>
> On Fri, Jun 14, 2019 at 5:21 PM Stas Berkov  wrote:
>
>> ES can befit from `nameof` feature the same way as TS. There is no TS
>> specific in it.
>> It was ask to introduce in TS as a workaround since TS is considered as
>> extention of ES.
>>
>> Case 1. Function guard.
>> ```
>> function func1(param1, param2, param3, userName, param4, param5) {
>>if (userName == undefined) {
>>throw new ArgumentNullError(nameof userName); //
>> `ArgumentNullError` is a custom error, derived from `Error`, composes error
>> message like "Argument cannot be null: userName".
>>}
>> }
>> ```
>>
>> Case 2. Access extended information an object property.
>> Assume a function
>> ```
>> function protoPropertyIsSet(message, property) {
>> return message != null && message.hasOwnProperty(property);
>> }
>> ```
>> Then in code you use it as `if (protoPropertyIsSet(msg,
>> "expiration_utc_time")) {... }`.
>> Having `nameof` would allow you to do that `if (protoPropertyIsSet(msg,
>> nameof msg.expiration_utc_time)) {... }`.
>> You get more robust code.
>>
>> On Fri, Jun 14, 2019 at 5:46 PM Augusto Moura 
>> wrote:
>>
>>> Can you list the benefits of having this operators? Maybe with example
>>> use cases
>>>
>>> If I understand it correctly, the operator fits better in compiled
>>> (and typed) languages, most of the use cases don't apply to dynamic
>>> Javascript
>>> The only legit use case I can think of is helping refactor tools to
>>> rename properties (but even mismatch errors between strings and
>>> properties names can be caught in compile time using modern
>>> Typescript)
>>>
>>> Em sex, 14 de jun de 2019 às 10:05, Stas Berkov
>>>  escreveu:
>>> >
>>> > Can we revisit this issue?
>>> >
>>> >
>>> > In C# there is `nameof`, in Swift you can do the same by calling
>>> >
>>> > ```
>>> >
>>> > let keyPath = \Person.mother.firstName
>>> >
>>> > NSPredicate(format: "%K == %@", keyPath, "Andrew")
>>> >
>>> > ```
>>> >
>>> > Let's introduce `nameof` in ES, please.
>>> >
>>> >
>>> > Devs from TypeScript don't want to introduce this feature in
>>> TypeScript unless it is available in ES (
>>> https://github.com/microsoft/TypeScript/issues/1579 )
>>> >
>>> > This feature is eagarly being asked by TypeScript community.
>>> >
>>> >
>>> > I understand there are couple issues related to `nameof` feature in
>>> ES. They are: minification and what to do if user already has `nameof`
>>> function.
>>> >
>>> >
>>> > Minification.
>>> >
>>> > 1. If your code to be minimized be prepared that variable names will
>>> also change.
>>> >
>>> > 2. (just a possibility) Minimizer can have option to replace
>>> `nameof(someVar)` with result of `nameof` function.
>>> >
>>> >
>>> >
>>> > What if user already has `nameof` function.
>>> >
>>> > 1. To maintain status quo we can user `nameof` function having
>>> priority over newly introduced language feature.
>>> >
>>> > 2. OR we can use `typeof` syntax, e.g. `nameof msg.userName` (//
>>> returns "userName" string)
>>> >
>>> > ___
>>> > es-discuss mailing list
>>> > es-discuss@mozilla.org
>>> > https://mail.mozilla.org/listinfo/es-discuss
>>>
>>>
>>>
>>> --
>>> Atenciosamente,
>>>
>>> Augusto Borges de Moura
>>>
>> ___
>> es-discuss mailing list
>> es-discuss@mozilla.org
>> https://mail.mozilla.org/listinfo/es-discuss
>>
>


Re: Re: What do you think about a C# 6 like nameof() expression for

2019-06-14 Thread Stas Berkov
ES can benefit from the `nameof` feature the same way as TS. There is
nothing TS-specific in it.
It was asked for in TS as a workaround, since TS is considered an
extension of ES.

Case 1. Function guard.
```
function func1(param1, param2, param3, userName, param4, param5) {
   if (userName == undefined) {
   throw new ArgumentNullError(nameof userName); // `ArgumentNullError`
is a custom error, derived from `Error`, composes error message like
"Argument cannot be null: userName".
   }
}
```

Case 2. Accessing extended information about an object property.
Assume a function
```
function protoPropertyIsSet(message, property) {
return message != null && message.hasOwnProperty(property);
}
```
Then in code you use it as `if (protoPropertyIsSet(msg,
"expiration_utc_time")) {... }`.
Having `nameof` would allow you to write `if (protoPropertyIsSet(msg,
nameof msg.expiration_utc_time)) {... }`.
You get more robust code.

On Fri, Jun 14, 2019 at 5:46 PM Augusto Moura 
wrote:

> Can you list the benefits of having this operators? Maybe with example use
> cases
>
> If I understand it correctly, the operator fits better in compiled
> (and typed) languages, most of the use cases don't apply to dynamic
> Javascript
> The only legit use case I can think of is helping refactor tools to
> rename properties (but even mismatch errors between strings and
> properties names can be caught in compile time using modern
> Typescript)
>
> Em sex, 14 de jun de 2019 às 10:05, Stas Berkov
>  escreveu:
> >
> > Can we revisit this issue?
> >
> >
> > In C# there is `nameof`, in Swift you can do the same by calling
> >
> > ```
> >
> > let keyPath = \Person.mother.firstName
> >
> > NSPredicate(format: "%K == %@", keyPath, "Andrew")
> >
> > ```
> >
> > Let's introduce `nameof` in ES, please.
> >
> >
> > Devs from TypeScript don't want to introduce this feature in TypeScript
> unless it is available in ES (
> https://github.com/microsoft/TypeScript/issues/1579 )
> >
> > This feature is eagarly being asked by TypeScript community.
> >
> >
> > I understand there are couple issues related to `nameof` feature in ES.
> They are: minification and what to do if user already has `nameof` function.
> >
> >
> > Minification.
> >
> > 1. If your code to be minimized be prepared that variable names will
> also change.
> >
> > 2. (just a possibility) Minimizer can have option to replace
> `nameof(someVar)` with result of `nameof` function.
> >
> >
> >
> > What if user already has `nameof` function.
> >
> > 1. To maintain status quo we can user `nameof` function having priority
> over newly introduced language feature.
> >
> > 2. OR we can use `typeof` syntax, e.g. `nameof msg.userName` (// returns
> "userName" string)
> >
> > ___
> > es-discuss mailing list
> > es-discuss@mozilla.org
> > https://mail.mozilla.org/listinfo/es-discuss
>
>
>
> --
> Atenciosamente,
>
> Augusto Borges de Moura
>


Re: Re: What do you think about a C# 6 like nameof() expression for

2019-06-14 Thread Stas Berkov
Can we revisit this issue?


In C# there is `nameof`, in Swift you can do the same by calling

```

let keyPath = \Person.mother.firstName

NSPredicate(format: "%K == %@", keyPath, "Andrew")

```

Let's introduce `nameof` in ES, please.


Devs from TypeScript don't want to introduce this feature in TypeScript
unless it is available in ES (
https://github.com/microsoft/TypeScript/issues/1579 )

This feature is eagerly being asked for by the TypeScript community.


I understand there are a couple of issues related to a `nameof` feature
in ES. They are: minification, and what to do if a user already has a
`nameof` function.


Minification.

1. If your code is to be minified, be prepared for variable names to
change as well.

2. (just a possibility) The minifier can have an option to replace
`nameof(someVar)` with the result of the `nameof` function.



What if a user already has a `nameof` function?

1. To maintain the status quo, we can give a user-defined `nameof`
function priority over the newly introduced language feature.

2. OR we can use `typeof`-like syntax, e.g. `nameof msg.userName` (//
returns the "userName" string)


Re: [Wikidata] Scaling Wikidata Query Service

2019-06-13 Thread Stas Malyshev
Hi!

> Data living in an RDBMS engine distinct from Virtuoso is handled via the
> engines Virtual Database module i.e., you can build powerful RDF Views
> over ODBC- or JDBC- accessible data using Virtuoso. These view also have
> the option of being materialized etc..

Yes, but the way the data are stored now is a JSON blob within a text
field in MySQL. I do not see how an RDF View over ODBC would help here -
of course Virtuoso would be able to fetch the JSON text for a single
item, but then what? We'd need to run queries across millions of items,
and fetching and parsing JSON for every one of them every time is
unfeasible. Not to mention this JSON is not an accurate representation
of the RDF data model. So I don't think it is worth spending time in
this direction... I just don't see how any query engine could work with
that storage.
-- 
Stas Malyshev
smalys...@wikimedia.org



Re: [Wikidata] Scaling Wikidata Query Service

2019-06-13 Thread Stas Malyshev
Hi!

> It handles data locality across a shared nothing cluster just fine i.e.,
> you can interact with any node in a Virtuoso cluster and experience
> identical behavior (everyone node looks like single node in the eyes of
> the operator).

Does this mean no sharding, i.e. each server stores the full DB? This is
the model we're using currently, but given the growth of the data it may
not be sustainable on current hardware. I see in your tables that
Uniprot has about 30B triples, but I wonder what the update loads there
look like. Our main issue is that the hardware we have now is showing
its limits when there are a lot of updates in parallel with significant
query load. So I wonder if the "single server holds everything" model is
sustainable in the long term.

> There are live instances of Virtuoso that demonstrate its capabilities.
> If you want to explore shared-nothing cluster capabilities then our live
> LOD Cloud cache is the place to start [1][2][3]. If you want to see the
> single-server open source edition that you have DBpedia, DBpedia-Live,
> Uniprot and many other nodes in the LOD Cloud to choose from. All of
> these instance are highly connected.

Again, here the question is not so much "can you load 7bn triples into
Virtuoso" - we know we can. What we want to figure out is whether, given
the specific query/update patterns we have now, it is going to give us
significantly better performance, allowing us to support our projected
growth.
And also possibly whether Virtuoso has ways to make our update workflow
more optimal - e.g. right now, if one triple changes in a Wikidata item,
we essentially download and update the whole item (not exactly, since
triples that stay the same are preserved, but it requires a lot of data
transfer to express that in SPARQL). Would there be ways to update
things more efficiently?

> Virtuoso handles both shared-nothing clusters and replication i.e., you
> can have a cluster configuration used in conjunction with a replication
> topology if your solution requires that.

Replication could certainly be useful, I think, if it's faster to update
a single server and then replicate than to simultaneously update all
servers (which is what is happening now).

-- 
Stas Malyshev
smalys...@wikimedia.org



Re: [Wikidata] Scaling Wikidata Query Service

2019-06-13 Thread Stas Malyshev
Hi!

> Unlike, most sites we do have our own custom frontend in front of
> virtuoso. We did this to allow more styling, as well as being flexible
> and change implementations at our whim. e.g. we double parse the SPARQL
> queries and even rewrite some to be friendlier. I suggest you do the
> same no matter which DB you use in the end, and we would be willing to
> open source ours (it is in Java, and uses RDF4J and some ugly JSPX but
> it works, if not to use at least as an inspiration). We did this to
> avoid being locked into endpoint specific features.

It would be interesting to know more about this, if this is open source.
Is there any more information about it online?

> Pragmatically, while WDS is a Graph database, the queries are actually
> very relational. And none of the standard graph algorithms are used. To

If you mean algorithms like A* or PageRank, then yes, they are not used
too much (likely also because SPARQL has no standard support for any of
these), though Blazegraph implements some of them as custom services.

> be honest RDF is actually a relational system which means that
> relational techniques are very good at answering them. The sole issue is
> recursive queries (e.g. rdfs:subClassOf+) in which the virtuoso
> implementation is adequate but not great.

Yes, path queries are pretty popular on WDQS too, especially given that
many relationships, like administrative/territorial placement or
ownership, are hierarchical and transitive, which often requires path
queries.

> This is why recovering physical schemata from RDF data is such a
> powerful optimization technique [1]. i.e. you tend to do joins not
> traversals. This is not always true but I strongly suspect it will hold
> for the vast majority of the Wikidata Query Service case.

Would be interesting to see if we can apply anything from the article.
Thanks for the link!

-- 
Stas Malyshev
smalys...@wikimedia.org



Re: [Kicad-developers] running multiple versions of KiCad on macOS

2019-06-12 Thread Seppe Stas
Andy, I don’t think Adam is talking about the library folders; those can
already be easily set using environment variables.

On that note though, being able to have those environment variables
updated depending on the KiCad version would also be nice to have, since
for some reason KiCad does not seem to pick up on env vars set by
launchctl on macOS, making easy switching only available when launching
KiCad from the command line (which also seems to come with a bunch of
bugs seemingly related to OpenGL).

Right now I have some custom scripts that set up the correct env vars
and symlink the library table files; it works OK-ish.

I also noted that KiCad 4 currently complains on launch because it
doesn’t understand some settings set by KiCad 5. Everything still works
fine though.

Seppe

On Thu, 13 Jun 2019 at 00:20, Andy Peters  wrote:

>
> > On Jun 12, 2019, at 2:38 PM, Adam Wolf 
> wrote:
> >
> > Seeing Seppe's patch made me think of something I tried to do last
> > time, but ended up running out of time.
> >
> > What do folks think about changing the data directory for macOS to
> > have the major version, to make it a little easier to run KiCad 5 and
> > 6 on the same computer?  Am I opening a can of worms?
>
> “data directory” as in where the libraries etc are stored?
>
> Currently in ~/Library/Application Support/kicad or /Library/Application
> Support/kicad ??
>
> Possibly changed to ~/Library/Application Support/kicad 5/ and
> ~/Library/Application Support/kicad 6/ for example?
>
> I don’t have a problem with it, but the question is how to manage it.
> Assume that everyone who already has Kicad installed is using the “default”
> location. When the user upgrades to a new major version, perhaps on first
> run it should ask about the library locations.
>
> But there’s a complication … do those locations exist?
>
> As part of that first run, should the new version offer to upgrade
> existing libraries and store them in the new location?
>
> (And what does that do to users who keep libraries in source-code control
> and the libraries on the computers live in working copies?)
>
> I mean, I agree that if the intent is to be able to run Kicad 5 and Kicad
> 6 on the same machine and those versions have incompatible libraries then
> yes, we need to be able to tell those two installs where their libraries
> live.
>
> Of course, if Kicad 6 can use Kicad 5’s libraries as is, then there is no
> need for the distinction.
>
> Yes, can of worms indeed.
>
> -a
> ___
> Mailing list: https://launchpad.net/~kicad-developers
> Post to : kicad-developers@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~kicad-developers
> More help   : https://help.launchpad.net/ListHelp
>
___
Mailing list: https://launchpad.net/~kicad-developers
Post to : kicad-developers@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kicad-developers
More help   : https://help.launchpad.net/ListHelp


Re: [Kicad-developers] [PATCH] Set KiCad version in MacOS apps

2019-06-12 Thread Seppe Stas
Awesome, thanks.

Could you release it for KiCad 5 so I can use it for the upcoming KiCad
5 to 6 migration?

Seppe

On Wed, 12 Jun 2019 at 13:43, Seth Hillbrand  wrote:

> Hi Seppe-
>
> I gave it a quick run and looks good.  I pushed your patch with the
> addition of @KICAD_VERSION_FULL@ to the CFBundleVersion extended version
> string.
>
> Thank you for your contribution to KiCad!
> -Seth
>
> On 2019-06-11 21:50, Adam Wolf wrote:
> > I did not test it, but reading over it, it looks great.  Thanks!
> >
> > Adam
> >
> > On Tue, Jun 11, 2019 at 1:41 PM Adam Wolf
> >  wrote:
> >>
> >> I will review today.
> >>
> >> Thanks for your help, Seppe!
> >>
> >> On Tue, Jun 11, 2019, 10:11 AM Wayne Stambaugh 
> >> wrote:
> >>>
> >>> Seppe,
> >>>
> >>> Your patch looks good to me.  Any MacOS devs care to comment?
> >>>
> >>> Cheers,
> >>>
> >>> Wayne
> >>>
> >>> On 6/11/19 10:46 AM, Seppe Stas wrote:
> >>> > Hey Wayne
> >>> >
> >>> > I attached my patch (generated with `git format-patch --attach
> >>> > origin/master`) to my last email as
> >>> > per
> http://www.kicad-pcb.org/contribute/developers/#_submitting_patches.
> >>> > I have a feeling Gmail might not like the mail headers in the patch.
> >>> >
> >>> > I created a new patch without the --attach option and added it to
> this
> >>> > email (I am more used to this patch format and I believe it worked in
> >>> > the past).
> >>> >
> >>> > Greetings
> >>> > Seppe
> >>> >
> >>> > On Tue, Jun 11, 2019 at 4:27 PM Wayne Stambaugh <
> stambau...@gmail.com
> >>> > <mailto:stambau...@gmail.com>> wrote:
> >>> >
> >>> > Seppe,
> >>> >
> >>> > I don't understand why your emails keep ending up on the
> moderated list
> >>> > but something strange is going on.  I had to moderate this one
> as well.
> >>> >  Please attach your patch (created using `git format-patch`) so
> it can
> >>> > be reviewed and commented on.
> >>> >
> >>> > Cheers,
> >>> >
> >>> > Wayne
> >>> >
> >>> > On 6/11/19 10:17 AM, Seppe Stas wrote:
> >>> > > Hey
> >>> > >
> >>> > > I closed the merge request on Launchpad and re-attached the
> patch and
> >>> > > before and after screenshots (the after being built from a
> dirty
> >>> > master
> >>> > > branch) to this mail:
> >>> > >
> >>> > > Before:
> >>> > > [image: Screenshot 2019-06-05 at 22.46.43.png]
> >>> > > After:
> >>> > > [image: Screenshot 2019-06-05 at 22.46.54.png]
> >>> > >
> >>> > > As you can see, having this version information displayed in
> Spotlight
> >>> > > makes choosing the correct KiCad version a lot easier. It works
> >>> > for the
> >>> > > other apps (EEschema, PCBNew, ...) as well.
> >>> > >
> >>> > > Greetings
> >>> > > Seppe
> >>> > >
> >>> > > On Tue, Jun 11, 2019 at 2:50 PM Seth Hillbrand <
> s...@hillbrand.org
> >>> > <mailto:s...@hillbrand.org>> wrote:
> >>> > >
> >>> > >> Hi Seppe-
> >>> > >>
> >>> > >> I see this e-mail.  Perhaps it was a launchpad hiccup.
> >>> > >>
> >>> > >> I've added Adam to the code review at [1].  Would you mind
> re-sending
> >>> > >> the images to the list?
> >>> > >>
> >>> > >> Thanks-
> >>> > >> Seth
> >>> > >>
> >>> > >> [1]
> >>> >
> https://code.launchpad.net/~seppestas/kicad/+git/kicad/+merge/368644
> >>> > >>
> >>> > >> On 2019-06-11 05:23, Seppe Stas wrote:
> >>> > >>> Hey
> >>> > >>>
> >>> > >>> I'm not sur

Re: [Wikidata] Scaling Wikidata Query Service

2019-06-12 Thread Stas Malyshev
Hi!

>> So there needs to be some smarter solution, one that we'd unlike to
> develop inhouse
> 
> Big cat, small fish. As wikidata continue to grow, it will have specific
> needs.
> Needs that are unlikely to be solved by off-the-shelf solutions.

Here I think it's a good place to remind that we're not Google, and
developing a new database engine in-house is probably a bit beyond our
resources and budget. Fitting an existing solution to our goals - sure,
but developing something new of that scale is probably not going to happen.

> FoundationDB and WiredTiger are respectively used at Apple (among other
> companies)
> and MongoDB since 3.2 all over-the-world. WiredTiger is also used at Amazon.

I believe they are, but I think for our particular goals we have to
limit ourselves to a set of solutions that are a proven good match for
our case.

>> We also have a plan on improving the throughput of Blazegraph, which
> we're working on now.
> 
> What is the phabricator ticket? Please.

You can see WDQS task board here:
https://phabricator.wikimedia.org/tag/wikidata-query-service/

> That will be vendor lock-in for wikidata and wikimedia along all the
> poor souls that try to interop with it.

Since Virtuoso uses standard SPARQL, it won't be too much of a vendor
lock-in, though of course the standard does not cover everything, so
some corners are different in every SPARQL engine. This is why even
migration between SPARQL engines, excluding operational aspects, is
non-trivial. Of course, migration to any non-SPARQL engine would be an
order of magnitude more disruptive, so right now we do not seriously
consider doing that.

> It has two backends: MMAP and rocksdb.

Sure, but I was talking about the data model - ArangoDB sees the data as
a set of documents. The RDF approach is a bit different.

> ArangoDB is a multi-model database, it support:

As I already mentioned, there's a difference between "you can do it" and
"you can do it efficiently". Graphs are simple creatures, and can be
modeled on many backends - KV, document, relational, column store,
whatever you have. The tricky part starts when you need to run millions
of queries on a 10B-triple database. If your backend is not optimal for
that task, it's not going to perform.

-- 
Stas Malyshev
smalys...@wikimedia.org

___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Kicad-developers] [PATCH] Set KiCad version in MacOS apps

2019-06-11 Thread Seppe Stas
Hey Wayne

I attached my patch (generated with `git format-patch --attach
origin/master`) to my last email as per
http://www.kicad-pcb.org/contribute/developers/#_submitting_patches. I have
a feeling Gmail might not like the mail headers in the patch.

I created a new patch without the --attach option and added it to this
email (I am more used to this patch format and I believe it worked in the
past).

Greetings
Seppe

On Tue, Jun 11, 2019 at 4:27 PM Wayne Stambaugh 
wrote:

> Seppe,
>
> I don't understand why your emails keep ending up on the moderated list
> but something strange is going on.  I had to moderate this one as well.
>  Please attach your patch (created using `git format-patch`) so it can
> be reviewed and commented on.
>
> Cheers,
>
> Wayne
>
> On 6/11/19 10:17 AM, Seppe Stas wrote:
> > Hey
> >
> > I closed the merge request on Launchpad and re-attached the patch and
> > before and after screenshots (the after being built from a dirty master
> > branch) to this mail:
> >
> > Before:
> > [image: Screenshot 2019-06-05 at 22.46.43.png]
> > After:
> > [image: Screenshot 2019-06-05 at 22.46.54.png]
> >
> > As you can see, having this version information displayed in Spotlight
> > makes choosing the correct KiCad version a lot easier. It works for the
> > other apps (EEschema, PCBNew, ...) as well.
> >
> > Greetings
> > Seppe
> >
> > On Tue, Jun 11, 2019 at 2:50 PM Seth Hillbrand 
> wrote:
> >
> >> Hi Seppe-
> >>
> >> I see this e-mail.  Perhaps it was a launchpad hiccup.
> >>
> >> I've added Adam to the code review at [1].  Would you mind re-sending
> >> the images to the list?
> >>
> >> Thanks-
> >> Seth
> >>
> >> [1]
> https://code.launchpad.net/~seppestas/kicad/+git/kicad/+merge/368644
> >>
> >> On 2019-06-11 05:23, Seppe Stas wrote:
> >>> Hey
> >>>
> >>> I'm not sure if this email got ignored or if it got rejected by some
> >>> mailing system, but it does not seem to show up in the mailing list
> >>> archive
> >>> <https://lists.launchpad.net/kicad-developers/date.html>.
> >>>
> >>> Maybe now it works?
> >>>
> >>> Seppe
> >>>
> >>> On Wed, Jun 5, 2019 at 10:55 PM Seppe Stas 
> wrote:
> >>>
> >>>> Hey guys and girls (probably mostly Adam in particular)
> >>>>
> >>>> Attached is a patch that sets the version in all MacOS apps to the
> >>>> value
> >>>> of KICAD_VERSION, i.e the value of git describe. See commit message
> >>>> for
> >>>> more technical details. This version show up when e.g launching the
> >>>> app
> >>>> using spotlight, and during the migration period from KiCad 4 to KiCad
> >>>> 5
> >>>> (that is still going on) I really miss(ed) this feature, since I have
> >>>> 3
> >>>> different versions of KiCad on my system (4, 5 and master).
> >>>>
> >>>> See attached screenshots:
> >>>> [image: Screenshot 2019-06-05 at 22.46.54.png]
> >>>> [image: Screenshot 2019-06-05 at 22.46.43.png]
> >>>>
> >>>> I tested this patch on both the latest master (6f8a0a4ee) and the 5.1
> >>>> branch (cd6da987c). I hope you consider adding it to a KiCad 5 release
> >>>> so I
> >>>> can use it when KiCad 6 comes out. (I hope I finished migrating to
> >>>> KiCad 5
> >>>> by then).
> >>>>
> >>>> Note that in order to update this value, CMake has to be re-run, but I
> >>>> would not expect this to be a big problem since I assume the releases
> >>>> are
> >>>> always built from scratch.
> >>>>
> >>>> Greeting
> >>>> Seppe Stas
> >>>>
> >>>
> >>> ___
> >>> Mailing list: https://launchpad.net/~kicad-developers
> >>> Post to : kicad-developers@lists.launchpad.net
> >>> Unsubscribe : https://launchpad.net/~kicad-developers
> >>> More help   : https://help.launchpad.net/ListHelp
> >>
> >
> >
> > ___
> > Mailing list: https://launchpad.net/~kicad-developers
> > Post to : kicad-developers@lists.launchpad.net
> > Unsubscribe : https://launchpad.net/~kicad-developers
> > More help   : https://help.launchpad.net/ListHelp
> >
>
> ___
> Mailing list: https://launchpad.net/~kicad-developers
> Post to : kicad-developers@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~kicad-developers
> More help   : https://help.launchpad.net/ListHelp
>


0001-Set-KiCad-version-in-MacOS-apps.patch
Description: Binary data


Re: [Kicad-developers] [PATCH] Set KiCad version in MacOS apps

2019-06-11 Thread Seppe Stas
Hey

I closed the merge request on Launchpad and re-attached the patch and
before and after screenshots (the after being built from a dirty master
branch) to this mail:

Before:
[image: Screenshot 2019-06-05 at 22.46.43.png]
After:
[image: Screenshot 2019-06-05 at 22.46.54.png]

As you can see, having this version information displayed in Spotlight
makes choosing the correct KiCad version a lot easier. It works for the
other apps (EEschema, PCBNew, ...) as well.

Greetings
Seppe

On Tue, Jun 11, 2019 at 2:50 PM Seth Hillbrand  wrote:

> Hi Seppe-
>
> I see this e-mail.  Perhaps it was a launchpad hiccup.
>
> I've added Adam to the code review at [1].  Would you mind re-sending
> the images to the list?
>
> Thanks-
> Seth
>
> [1] https://code.launchpad.net/~seppestas/kicad/+git/kicad/+merge/368644
>
> On 2019-06-11 05:23, Seppe Stas wrote:
> > Hey
> >
> > I'm not sure if this email got ignored or if it got rejected by some
> > mailing system, but it does not seem to show up in the mailing list
> > archive
> > <https://lists.launchpad.net/kicad-developers/date.html>.
> >
> > Maybe now it works?
> >
> > Seppe
> >
> > On Wed, Jun 5, 2019 at 10:55 PM Seppe Stas  wrote:
> >
> >> Hey guys and girls (probably mostly Adam in particular)
> >>
> >> Attached is a patch that sets the version in all MacOS apps to the
> >> value
> >> of KICAD_VERSION, i.e the value of git describe. See commit message
> >> for
> >> more technical details. This version show up when e.g launching the
> >> app
> >> using spotlight, and during the migration period from KiCad 4 to KiCad
> >> 5
> >> (that is still going on) I really miss(ed) this feature, since I have
> >> 3
> >> different versions of KiCad on my system (4, 5 and master).
> >>
> >> See attached screenshots:
> >> [image: Screenshot 2019-06-05 at 22.46.54.png]
> >> [image: Screenshot 2019-06-05 at 22.46.43.png]
> >>
> >> I tested this patch on both the latest master (6f8a0a4ee) and the 5.1
> >> branch (cd6da987c). I hope you consider adding it to a KiCad 5 release
> >> so I
> >> can use it when KiCad 6 comes out. (I hope I finished migrating to
> >> KiCad 5
> >> by then).
> >>
> >> Note that in order to update this value, CMake has to be re-run, but I
> >> would not expect this to be a big problem since I assume the releases
> >> are
> >> always built from scratch.
> >>
> >> Greeting
> >> Seppe Stas
> >>
> >
> > ___
> > Mailing list: https://launchpad.net/~kicad-developers
> > Post to : kicad-developers@lists.launchpad.net
> > Unsubscribe : https://launchpad.net/~kicad-developers
> > More help   : https://help.launchpad.net/ListHelp
>


Re: [Kicad-developers] [PATCH] Set KiCad version in MacOS apps

2019-06-11 Thread Seppe Stas
Hey

I'm not sure if this email got ignored or if it got rejected by some
mailing system, but it does not seem to show up in the mailing list archive
<https://lists.launchpad.net/kicad-developers/date.html>.

Maybe now it works?

Seppe

On Wed, Jun 5, 2019 at 10:55 PM Seppe Stas  wrote:

> Hey guys and girls (probably mostly Adam in particular)
>
> Attached is a patch that sets the version in all MacOS apps to the value
> of KICAD_VERSION, i.e the value of git describe. See commit message for
> more technical details. This version show up when e.g launching the app
> using spotlight, and during the migration period from KiCad 4 to KiCad 5
> (that is still going on) I really miss(ed) this feature, since I have 3
> different versions of KiCad on my system (4, 5 and master).
>
> See attached screenshots:
> [image: Screenshot 2019-06-05 at 22.46.54.png]
> [image: Screenshot 2019-06-05 at 22.46.43.png]
>
> I tested this patch on both the latest master (6f8a0a4ee) and the 5.1
> branch (cd6da987c). I hope you consider adding it to a KiCad 5 release so I
> can use it when KiCad 6 comes out. (I hope I finished migrating to KiCad 5
> by then).
>
> Note that in order to update this value, CMake has to be re-run, but I
> would not expect this to be a big problem since I assume the releases are
> always built from scratch.
>
> Greeting
> Seppe Stas
>


Re: [Wikidata] Scaling Wikidata Query Service

2019-06-10 Thread Stas Malyshev
Hi!

> thanks for the elaboration. I can understand the background much better.
> I have to admit, that I am also not a real expert, but very close to the
> real experts like Vidal and Rahm who are co-authors of the SWJ paper or
> the OpenLink devs.

If you know anybody at OpenLink who would be interested in trying to
evaluate such a thing (i.e. how Wikidata could be hosted on Virtuoso) and
provide support for this project, it would be interesting to discuss it.
While the open-source question is still a barrier and in general the
requirements are different, at least discussing it and maybe getting
some numbers might be useful.

Thanks,
-- 
Stas Malyshev
smalys...@wikimedia.org



Re: [Wikidata] Scaling Wikidata Query Service

2019-06-10 Thread Stas Malyshev
Hi!

> Yes, sharding is what you need, I think, instead of replication. This is
> the technique where data is repartitioned into more manageable chunks
> across servers.

Agreed, if we are to get any solution that is not constrained by the
hardware limits of a single server, we cannot avoid looking at sharding.

> Here is a good explanation of it:
> 
> http://vos.openlinksw.com/owiki/wiki/VOS/VOSArticleWebScaleRDF

Thanks, very interesting article. I'd certainly like to know how this
works with a database on the order of 10 bln. triples and queries both
accessing and updating random subsets of them. Updates are not covered
very thoroughly there - this is, I suspect, because many databases of
that size do not have as active a (non-append) update workload as we do.
Maybe they still manage to solve it; if so, I'd very much like to know
about it.
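As a toy illustration of the repartitioning idea (nothing to do with how Virtuoso or Blazegraph actually partition data - this is an assumption-laden sketch), triples can be hash-partitioned by subject so that all statements about one entity land on the same shard:

```python
import hashlib

N_SHARDS = 4  # illustrative cluster size, not a real WDQS parameter

def shard_for(subject):
    # Hash the subject IRI so that every triple of one entity maps to
    # the same shard; single-entity queries then touch one server.
    digest = hashlib.sha1(subject.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % N_SHARDS

triples = [
    ("wd:Q42", "wdt:P31", "wd:Q5"),
    ("wd:Q42", "rdfs:label", '"Douglas Adams"@en'),
    ("wd:Q64", "wdt:P31", "wd:Q515"),
]
shards = {}
for s, p, o in triples:
    shards.setdefault(shard_for(s), []).append((s, p, o))
```

The hard part the article glosses over, as noted above, is not the partitioning itself but keeping such shards consistent under a heavy random-update workload.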

> Just a note here: Virtuoso is also a full RDMS, so you could probably
> keep wikibase db in the same cluster and fix the asynchronicity. That is

Given how the original data is stored (a JSON blob inside a MySQL
table), it would not be very useful. In general, the graph data model
and the Wikitext data model on top of which Wikidata is built are very,
very different, and expecting the same storage to serve both - at least
without very major and deep refactoring of the code on both sides - is
not currently very realistic. And of course moving any of the wiki
production databases to Virtuoso would be a non-starter. Given that the
original Wikidata database stays on MySQL - which I think is a
reasonable assumption - there would need to be a data migration pipeline
for data to come from MySQL to whatever the WDQS NG storage is.
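For illustration only, such an export step could be sketched roughly like this - the input shape is a heavily simplified, hypothetical stand-in for the real Wikidata entity JSON, and the function name is made up:

```python
# Toy sketch of a MySQL-JSON -> triples export step. The real pipeline
# must also handle snak datatypes, qualifiers, references, deletions,
# and so on; here a statement is reduced to a bare "value" field.
def entity_to_triples(entity):
    subject = "wd:" + entity["id"]
    triples = []
    for prop, statements in entity.get("claims", {}).items():
        for statement in statements:
            # Simplified: real statements carry typed datavalues,
            # not plain strings.
            triples.append((subject, "wdt:" + prop, statement["value"]))
    return triples

doc = {"id": "Q42", "claims": {"P31": [{"value": "wd:Q5"}]}}
print(entity_to_triples(doc))  # [('wd:Q42', 'wdt:P31', 'wd:Q5')]
```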

> also true for any mappers like Sparqlify:
> http://aksw.org/Projects/Sparqlify.html However, these shift the
> problem, then you need a sharded/repartitioned relational database

Yes, relational-RDF bridges are known, but my experience is they usually
are not very performant (the difference between "you can do it" and "you
can do it fast" is sometimes very significant), and in our case it would
be useless anyway, as Wikidata data is not really stored in a relational
database per se - it's stored in a JSON blob opaquely saved in a
relational database structure that knows nothing about Wikidata. Yes,
it's not the ideal structure for optimal performance of Wikidata itself,
but I do not foresee this changing, at least in the short term. Again,
we could of course have a data export pipeline to whatever storage
format we want - essentially we already have one - but the concept of
having a single data store is probably not realistic within foreseeable
timeframes. We use a separate data store for search (Elasticsearch) and
will probably have to have a separate one for queries, whatever the
mechanism.

Thanks,
-- 
Stas Malyshev
smalys...@wikimedia.org



Re: [Wikidata] Scaling Wikidata Query Service

2019-06-10 Thread Stas Malyshev
special arrangement. Since this arrangement will probably not include
open-sourcing the enterprise part of Virtuoso, it should deliver a very
significant, I dare say enormous advantage for us to consider running it
in production. It may be possible that just the OS version is also
clearly superior to the point that it is worth migrating, but this needs
to be established by evaluation.

> - I recently heard a presentation from Arango-DB and they had a good
> cluster concept as well, although I don't know anybody who tried it. The
> slides seemed to make sense.

We considered ArangoDB in the past, and it turned out we couldn't use it
efficiently at the scale we need (could be our fault, of course). They
also use their own proprietary query language, which might be worth it
if it delivered a clear win on all other aspects, but that does not seem
to be the case.
Also, ArangoDB seems to be a document database inside. This is not what
our current data model is. While it is possible to model Wikidata this
way, again, changing the data model from RDF/SPARQL to a different one
is an enormous shift, which can only be justified by an equally enormous
improvement in some other area, which is currently not apparent. This
project also seems to be still very young. While I would be very
interested if somebody took it on themselves to model Wikidata in terms
of ArangoDB documents, load the whole dataset and see what the resulting
performance would be, I am not sure it would be wise for us to invest
our team's - currently very limited - resources into that.

Thanks,
-- 
Stas Malyshev
smalys...@wikimedia.org



Re: [Wikidata] searching for Wikidata items

2019-06-04 Thread Stas Malyshev
Hi!

> Yes, the api is
> at https://www.wikidata.org/w/api.php?action=query&list=search&srsearch=Bush

There's also
https://www.wikidata.org/w/api.php?action=wbsearchentities&search=Bush&language=en&format=json

This is what completion search in Wikidata is using.
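For reference, a request like the one above can be assembled programmatically; this is a minimal sketch using the parameters visible in the URL (the helper name is made up, and it only builds the URL - it does not perform the HTTP call):

```python
from urllib.parse import urlencode

# Hypothetical helper assembling the completion-search request shown
# above (action=wbsearchentities against the Wikidata API endpoint).
def wbsearchentities_url(term, language="en"):
    params = {
        "action": "wbsearchentities",
        "search": term,
        "language": language,
        "format": "json",
    }
    return "https://www.wikidata.org/w/api.php?" + urlencode(params)

print(wbsearchentities_url("Bush"))
```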
-- 
Stas Malyshev
smalys...@wikimedia.org



Re: [Wikidata] Where did label filtering break recently and how?

2019-05-30 Thread Stas Malyshev
Hi!

> and if I enable any of the FILTER lines, it returns 0 results.
> What changed / Why ?

Thanks for reporting, I'll check into it.

-- 
Stas Malyshev
smalys...@wikimedia.org



Re: Read-only access to temp tables for 2PC transactions

2019-05-22 Thread Stas Kelvich


> On 14 May 2019, at 12:53, Stas Kelvich  wrote:
> 
> Hi,
> 
> That is an attempt number N+1 to relax checks for a temporary table access
> in a transaction that is going to be prepared.
> 

Konstantin Knizhnik made an off-list review of this patch and spotted a
few problems.

* Incorrect reasoning that the ON COMMIT DELETE truncate mechanism
should be changed in order to allow preparing transactions with
read-only access to temp relations. It can actually be left as is.
The things done in the previous patch for ON COMMIT DELETE may be a
performance win, but are not directly related to this topic, so I've
deleted that part.

* Copy-paste error with check conditions in
relation_open/relation_try_open.

Fixed version attached.

--
Stas Kelvich
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



2PC-ro-temprels-v2.patch
Description: Binary data


Read-only access to temp tables for 2PC transactions

2019-05-14 Thread Stas Kelvich
Hi,

This is attempt number N+1 to relax the checks for temporary table
access in a transaction that is going to be prepared.

One of the problems with using temporary tables in prepared transactions
is that such a transaction will hold locks on a temporary table after
being prepared. Those locks will prevent the backend from exiting, since
it will fail to acquire the lock needed to delete the temp table during
exit. Also, re-acquiring such a lock after a server restart seems like
an ill-defined operation.

I tried to allow prepared transactions that opened a temporary relation
only in AccessShare mode, and then neither transfer this lock to a dummy
PGPROC nor include it in the 'prepare' record. Such a prepared
transaction will not prevent the backend from exiting and can be
committed from another backend or after a restart.

However, that modification allows a new DDL-related serialization
anomaly: it becomes possible to prepare a transaction which read table
A, then drop A, then commit the transaction. I'm not sure whether that
is worse than not being able to access temp relations at all. On the
other hand, it is possible to drop AccessShare locks only for temporary
relations and leave the behavior for ordinary tables unchanged (in the
attached patch this is done for all tables).

Also, I slightly modified the ON COMMIT DELETE code path. Right now all
ON COMMIT DELETE temp tables are linked in a static list, and if a
transaction accessed any temp table in any mode, then during commit all
tables from that list will be truncated. For the given patch that means
that even if a transaction only read from a temp table, it can still end
up accessing other temp tables with a high lock mode during commit. I've
added a hash table that tracks higher-than-AccessShare actions on temp
tables during the current transaction, so during commit only tables from
that hash are truncated. That way ON COMMIT DELETE tables in a backend
will not prevent read-only access to some other table in that backend.
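The bookkeeping described above could be modeled like this - an illustrative Python model of the proposed logic, not the actual C implementation, with made-up names and a simplified lock-mode ordering:

```python
# Remember the strongest lock mode taken on each temp table during the
# transaction; at commit, truncate only those ON COMMIT DELETE tables
# that were touched with something stronger than AccessShare.
ACCESS_SHARE, ROW_EXCLUSIVE = 1, 3  # simplified stand-ins for PG lock modes

class TempRelTracker:
    def __init__(self):
        self.max_mode = {}  # relation name -> strongest lock mode seen

    def record_access(self, rel, mode):
        self.max_mode[rel] = max(mode, self.max_mode.get(rel, 0))

    def tables_to_truncate(self, on_commit_delete_rels):
        return [r for r in on_commit_delete_rels
                if self.max_mode.get(r, 0) > ACCESS_SHARE]

t = TempRelTracker()
t.record_access("tmp_read_only", ACCESS_SHARE)  # plain SELECT
t.record_access("tmp_written", ROW_EXCLUSIVE)   # INSERT/UPDATE
print(t.tables_to_truncate(["tmp_read_only", "tmp_written"]))  # ['tmp_written']
```

In this model a read-only access no longer drags the whole ON COMMIT DELETE list into the commit, which is the point of the hash-table change.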

Any thoughts?

--
Stas Kelvich
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



2PC-ro-temprels.patch
Description: Binary data


Re: [Wikidata] Are we ready for our future

2019-05-04 Thread Stas Malyshev
Hi!

> WQS data doesn't have versions, it doesn't have to be in one space and
> can easily be separated. The whole point of LOD is to decentralize your
> data. But I understand that Wikidata/WQS is currently designend as a
> centralized closed shop service for several reasons granted.

True, WDQS does not have versions. But each time an edit is made, we
now have to download and work through the whole 2M... It wasn't a
problem when we were dealing with regular-sized entities, but the
current system certainly is not good for such giant ones.

As for decentralizing, WDQS supports federation, but for obvious reasons
federated queries are slower and less efficient. That said, if there
were a separate store for this kind of data, it might work, as
cross-querying against the rest of the Wikidata data wouldn't be very
frequent. But this is something the Wikidata community needs to figure
out how to do.

-- 
Stas Malyshev
smalys...@wikimedia.org



  1   2   3   4   5   6   7   8   9   10   >