[Bug 1966911] Re: Cannot enroll finger on Jammy

2024-05-25 Thread RJ Edgerly
Comment
https://bugs.launchpad.net/libfprint-2-tod1-goodix/+bug/1966911/comments/42
says "libfprint-2-tod1-goodix only supports 27c6:533C"; if so, might
that explain why the driver is ineffective on the Dell XPS 13?

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1966911

Title:
  Cannot enroll finger on Jammy

To manage notifications about this bug go to:
https://bugs.launchpad.net/libfprint-2-tod1-goodix/+bug/1966911/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1966911] Re: Cannot enroll finger on Jammy

2024-05-25 Thread RJ Edgerly
`fprintd-enroll` returns: "Impossible to enroll:
GDBus.Error:net.reactivated.Fprint.Error.NoSuchDevice: No devices
available"

`lsusb` shows a Goodix MOC device, 27c6:5335; 5335 appears to be
unsupported per fprint's list of supported devices:
https://fprint.freedesktop.org/supported-devices.html
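A quick way to check whether a given machine's reader matches the supported ID is to compare the `lsusb` output against that list. A minimal sketch follows; the `SUPPORTED` set merely restates comment 42's claim about 27c6:533C and is an assumption, not an authoritative list.

```python
import re

# Assumption from comment 42 in this thread: libfprint-2-tod1-goodix
# only supports this vendor:product ID. Not an authoritative list.
SUPPORTED = {"27c6:533c"}

def goodix_ids(lsusb_output):
    """Extract vendor:product IDs of Goodix (vendor 27c6) devices
    from the text output of `lsusb`, lowercased for comparison."""
    return [i.lower() for i in re.findall(r"ID (27c6:[0-9a-fA-F]{4})", lsusb_output)]

# Sample line as reported later in this thread
sample = "Bus 003 Device 002: ID 27c6:5335 Shenzhen Goodix Technology Co.,Ltd."
for dev in goodix_ids(sample):
    print(dev, "supported" if dev in SUPPORTED else "NOT supported")
```

Run against the real `lsusb` output; if the ID printed is not in the driver's supported set, enrollment failures like the ones below are expected.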


[Bug 1966911] Re: Cannot enroll finger on Jammy

2024-05-25 Thread RJ Edgerly
Installed `libfprint-2-tod1-goodix/jammy,now 0.0.6-0ubuntu1~22.04.1` via
ppa:andch/staging-fprint on Ubuntu 22.04, but no "Fingerprint Login"
option appears in "Settings" -> "Users". This was after running `reboot`
to restart the machine (a Dell XPS 13).

Source thread that led me here:
https://askubuntu.com/questions/1406999/i-cant-use-fingerprint-sensor-in-ubuntu-22-04


[Conselhobrasil] mdantas extended their membership

2024-04-02 Thread Ubuntu Brasil - RJ
Hello Conselho Ubuntu Brasil,

Alexander Pindarov (mdantas) renewed their own membership in the Ubuntu
Brasil - RJ (ubuntu-br-rj) team until 2025-04-15.
<https://launchpad.net/~ubuntu-br-rj>

Regards,
The Launchpad team

-- 
You received this email because your team Conselho Ubuntu Brasil is an admin of 
the Ubuntu Brasil - RJ team.


___
Mailing list: https://launchpad.net/~conselhobrasil
Post to : conselhobrasil@lists.launchpad.net
Unsubscribe : https://launchpad.net/~conselhobrasil
More help   : https://help.launchpad.net/ListHelp


[Conselhobrasil] profmarcilio extended their membership

2024-03-18 Thread Ubuntu Brasil - RJ
Hello Conselho Ubuntu Brasil,

Marcilio Bergami de Carvalho (profmarcilio) renewed their own membership
in the Ubuntu Brasil - RJ (ubuntu-br-rj) team until 2025-04-15.
<https://launchpad.net/~ubuntu-br-rj>

Regards,
The Launchpad team



[Conselhobrasil] fcostapb extended their membership

2024-03-18 Thread Ubuntu Brasil - RJ
Hello Conselho Ubuntu Brasil,

Francisco Costa (fcostapb) renewed their own membership in the Ubuntu
Brasil - RJ (ubuntu-br-rj) team until 2025-04-15.
<https://launchpad.net/~ubuntu-br-rj>

Regards,
The Launchpad team



[Kernel-packages] [Bug 2053041] Re: Issue when shutting down computer: internal hard drive is not shut down properly

2024-02-13 Thread RJ
I just bought new drives for $2k because of this; then I realised it was
this kernel update.

Can confirm: Power-Off Retract Count (attribute 192) increases and the
HDDs get hard shutdowns.

Power Cycle Count also increases, as if the PC had lost power.

This will break many old drives!

Going back to the previous kernel solves it.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-signed-hwe-6.5 in Ubuntu.
https://bugs.launchpad.net/bugs/2053041

Title:
  Issue when shutting down computer: internal hard drive is not shut
  down properly

Status in linux-signed-hwe-6.5 package in Ubuntu:
  Confirmed

Bug description:
  After the last kernel upgrade, I noticed that, in the S.M.A.R.T.
  monitor, parameter ID 192 "Power-Off Retract Count" increased every
  time I turned off the system. I also noticed the "click" of the heads
  parking when shutting down the computer. When restarting the computer
  with the previous kernel version (6.5.0-15.15~22.04.1) the issue didn't
  happen, so I assume it's a kernel bug that causes the system to shut
  down before the hard drive is deactivated.
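One way to quantify the symptom described above is to sample attribute 192 before and after a shutdown. Below is a minimal sketch that parses the raw value out of `smartctl -A` output; it assumes the usual smartmontools table layout, with the raw value in the last column.

```python
import re

def power_off_retract_count(smartctl_output):
    """Return the raw value of SMART attribute 192 (Power-Off Retract
    Count) from `smartctl -A /dev/sdX` output, or None if absent.
    Assumes the standard attribute table with the raw value last."""
    for line in smartctl_output.splitlines():
        # Attribute rows start with the numeric ID; grab the trailing raw value.
        m = re.match(r"\s*192\s+\S+.*\s(\d+)\s*$", line)
        if m:
            return int(m.group(1))
    return None

sample = "192 Power-Off_Retract_Count 0x0032 099 099 000 Old_age Always - 1385"
print(power_off_retract_count(sample))  # 1385
```

Sampling this value across a reboot on the affected kernel versus 6.5.0-15 should show whether the counter only climbs on the newer kernel.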

  ProblemType: Bug
  DistroRelease: Ubuntu 22.04
  Package: linux-image-6.5.0-17-generic 6.5.0-17.17~22.04.1
  ProcVersionSignature: Ubuntu 6.5.0-17.17~22.04.1-generic 6.5.8
  Uname: Linux 6.5.0-17-generic x86_64
  ApportVersion: 2.20.11-0ubuntu82.5
  Architecture: amd64
  CasperMD5CheckResult: unknown
  CurrentDesktop: XFCE
  Date: Tue Feb 13 16:35:55 2024
  InstallationDate: Installed on 2020-10-02 (1228 days ago)
  InstallationMedia: Ubuntu 20.04.1 LTS "Focal Fossa" - Release amd64 (20200731)
  SourcePackage: linux-signed-hwe-6.5
  UpgradeStatus: Upgraded to jammy on 2022-08-11 (550 days ago)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-signed-hwe-6.5/+bug/2053041/+subscriptions


-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


WebAssembly Support in JavaFX WebKit

2023-11-13 Thread RJ Sheperd
First, big thanks to the folks who have maintained this project.

I recently opened a "Feature Request" ticket for "WebAssembly Support in
JavaFX WebKit", and stated that I would be happy to help get this support
enabled. WebAssembly has been enabled in WebKit since 2017:
https://webkit.org/blog/7691/webassembly/

The basic gist is: when I run load the following HTML, I get "undefined"
rather than "object":

<-- index.html -->


  
Hello from Webkit!


[libraries] Reminder: Next AI Salon upcoming Oct 27 at 12pm EDT

2023-10-26 Thread RJ Hardeman
Please join us Friday, October 27 at 12 noon EDT (UTC-04:00:)

Friday, Oct 27, 2023
12pm EDT (UTC−04:00)
https://meta.wikimedia.org/wiki/Wikimedia_and_Libraries_User_Group/Salons/2023/October


On Mon, Oct 23, 2023 at 8:43 AM RJ Hardeman  wrote:

> Hello,
>
> Please join us for the second of three AI Salon panel discussions re the
> intersection of Wikibrarians and artificial intelligence. This one will be
> on GoogleMeet.
>
> Friday, Oct 27, 2023
> 12pm EDT (UTC−04:00)
>
> https://meta.wikimedia.org/wiki/Wikimedia_and_Libraries_User_Group/Salons/2023/October
>
> Thank you!
> Best,
> Rajene Hardeman
> Chair WLUG Steering Committee
>
> On Thu, Sep 21, 2023 at 5:51 PM RJ Hardeman  wrote:
>
>> Hello,
>>
>> Please join us for the first of three AI Salon panel discussions re the
>> intersection of Wikibrarians and artificial intelligence. This one will be
>> on GoogleMeet.
>>
>> Thursday, Sep 28, 2023
>> 10am EDT (UTC−04:00)
>>
>> https://meta.wikimedia.org/wiki/Wikimedia_and_Libraries_User_Group/Salons/2023/September#AI_Salon
>>
>> Thank you!
>> Best,
>> Rajene Hardeman
>> Chair WLUG Steering Committee
>>
>
___
Libraries mailing list -- libraries@lists.wikimedia.org
To unsubscribe send an email to libraries-le...@lists.wikimedia.org


[libraries] Next AI Salon upcoming Oct 27 at 12pm EDT

2023-10-23 Thread RJ Hardeman
Hello,

Please join us for the second of three AI Salon panel discussions re the
intersection of Wikibrarians and artificial intelligence. This one will be
on GoogleMeet.

Friday, Oct 27, 2023
12pm EDT (UTC−04:00)
https://meta.wikimedia.org/wiki/Wikimedia_and_Libraries_User_Group/Salons/2023/October

Thank you!
Best,
Rajene Hardeman
Chair WLUG Steering Committee

On Thu, Sep 21, 2023 at 5:51 PM RJ Hardeman  wrote:

> Hello,
>
> Please join us for the first of three AI Salon panel discussions re the
> intersection of Wikibrarians and artificial intelligence. This one will be
> on GoogleMeet.
>
> Thursday, Sep 28, 2023
> 10am EDT (UTC−04:00)
>
> https://meta.wikimedia.org/wiki/Wikimedia_and_Libraries_User_Group/Salons/2023/September#AI_Salon
>
> Thank you!
> Best,
> Rajene Hardeman
> Chair WLUG Steering Committee
>


[libraries] Re: AI Salon upcoming Sep 28 at 10am EDT

2023-09-27 Thread RJ Hardeman
Reminder - Please join our first AI Salon tomorrow, Thursday, Sep 28 at
10am EDT

Thursday, Sep 28, 2023
10am EDT (UTC−04:00)
https://meta.wikimedia.org/wiki/Wikimedia_and_Libraries_User_Group/Salons/2023/September#AI_Salon


Thank you!
Best,
Rajene Hardeman
Chair WLUG Steering Committee


On Thu, Sep 21, 2023 at 5:51 PM RJ Hardeman  wrote:

> Hello,
>
> Please join us for the first of three AI Salon panel discussions re the
> intersection of Wikibrarians and artificial intelligence. This one will be
> on GoogleMeet.
>
> Thursday, Sep 28, 2023
> 10am EDT (UTC−04:00)
>
> https://meta.wikimedia.org/wiki/Wikimedia_and_Libraries_User_Group/Salons/2023/September#AI_Salon
>
> Thank you!
> Best,
> Rajene Hardeman
> Chair WLUG Steering Committee
>


[libraries] AI Salon upcoming Sep 28 at 10am EDT

2023-09-21 Thread RJ Hardeman
Hello,

Please join us for the first of three AI Salon panel discussions re the
intersection of Wikibrarians and artificial intelligence. This one will be
on GoogleMeet.

Thursday, Sep 28, 2023
10am EDT (UTC−04:00)
https://meta.wikimedia.org/wiki/Wikimedia_and_Libraries_User_Group/Salons/2023/September#AI_Salon

Thank you!
Best,
Rajene Hardeman
Chair WLUG Steering Committee


Re: Daemon

2023-08-16 Thread Jason RJ



On 15/08/2023 17:34, Craig Parker wrote:
I'm trying to get back into docs, and currently finally have a working 
OFBiz/MySQL install. But the way I documented getting it running as a 
daemon (and starting via systemctl at boot) involved a 
/$ofbizRoot/tools/rc.ofbiz.for.debian file, and there's not even a 
tools directory.


Did I miss something, or is there just a different way now?



Hi Craig,

You will need to create a systemd unit file for ofbiz, there's some info 
here 
https://cwiki.apache.org/confluence/display/OFBIZ/Install+OFBiz+with+MariaDB%2C+Apache2+Proxy+and+SSL


If you are still using init.d, you will need a script that can execute
the server commands to start and stop OFBiz, listed here:
https://github.com/apache/ofbiz-framework/tree/trunk#server-command-tasks



Start OFBiz:

    gradlew "ofbiz --start"

Shutdown OFBiz:

    gradlew "ofbiz --shutdown"

Get OFBiz status:

    gradlew "ofbiz --status"
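For the systemd route, a unit file wrapping those same gradlew commands can look like the sketch below. The install path, service user, and dependency on the database unit are assumptions and will differ per deployment.

```ini
# /etc/systemd/system/ofbiz.service -- illustrative sketch only;
# adjust paths, user, and After= to your installation.
[Unit]
Description=Apache OFBiz
After=network.target

[Service]
Type=simple
User=ofbiz
WorkingDirectory=/opt/ofbiz
ExecStart=/opt/ofbiz/gradlew "ofbiz --start"
ExecStop=/opt/ofbiz/gradlew "ofbiz --shutdown"
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After placing the file, `systemctl daemon-reload` and `systemctl enable --now ofbiz` replace the old rc.ofbiz.for.debian approach.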

Hope that helps

Jason



[libraries] WikiLibCon24 Community Survey, please respond prior to August 31

2023-08-04 Thread RJ Hardeman
Hello all,

Survey Link:  https://forms.gle/PKTWud749Fuxi1XV8
Please fill out the survey linked here as we are planning the 2nd
Wikimedia+Libraries International Convention for the first half of 2024. We
are still working on finalizing location and date details. More details to
come soon.

Please also share with library staff members who might be interested in
this convention (we foresee scholarship options) and have them take a
moment, before 31 August 2023, to fill out this “Community Survey”. Once
the survey is completed, we will seek funding from the Wikimedia Foundation
for the 2nd Wikimedia+Libraries International Convention.

Survey Link:  https://forms.gle/PKTWud749Fuxi1XV8

Thank you!
Rajene
Chair, Wikimedia and Library User Group

Steering Committee


[ovirt-users] Ovirt_Provider_Citrix-Xen

2023-07-20 Thread rj . indramaya
Hi All,
First of all, apologies if I'm going about this the wrong way; I could not
find any solution elsewhere, so I hoped I'd get some answers here.
In my organization I started exploring oVirt Engine. I have created
multiple VMs and all are working fine, but I ran into one issue: my
organization has a Citrix Xen server that is out of date, so I started
working on migrating those VMs directly to oVirt Engine. I tried to add the
Xen server as a provider, but it fails to load or connect to Citrix Xen.
Let me know how I should move forward.


Thanks in advance, people are amazing.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PGXE6UFRRIRXFRUV2SUL6ERUJMBXBROS/


Re: User Entered Text on Order

2023-07-13 Thread Jason RJ
Actually Bill I see you've said you are using a configurable item for 
the thread color, so you probably won't have the comment box visible as 
they are missing from configurable product screen template. You might 
consider using a virtual product with the thread colors features to 
drive variants instead if that's the case, then you'll get the comment 
box back.


Failing that, a small code change can pull the comment code into
ConfigProductDetail.ftl:

    <#assign commentEnable = Static["org.apache.ofbiz.entity.util.EntityUtilProperties"]
        .getPropertyValue("order", "order.item.comment.enable", delegator)>

    <#if commentEnable.equals("Y")>
      <#assign orderItemAttr = Static["org.apache.ofbiz.entity.util.EntityUtilProperties"]
          .getPropertyValue("order", "order.item.attr.prefix", delegator)>

      <label for="${orderItemAttr}comment">${uiLabelMap.CommonComment}</label>
      <input type="text" class="form-control" name="${orderItemAttr}comment"
          id="${orderItemAttr}comment"/>
    </#if>

Jason

On 13/07/2023 21:07, Jason RJ wrote:

Hi Bill

I don't think you have another OOTB option except the 'comment' box on 
the product page. You could add some text to the product description 
to tell the user that they should provide their chosen name in the 
comment field. This gets carried through to an Order Item Attribute 
and is shown on the checkout screen etc.


You might have comments disabled depending on the value of 
'order.item.comment.enable' in order.properties.


Jason

On 13/07/2023 18:27, Bill Harder wrote:

Thank you Jason.

I see how that works now.  Are you aware of any other inputs that 
might be

on the same initial screen?

I am thinking about OOTB so I don't have to custom develop.

Bill

On Thu, Jul 13, 2023 at 8:34 AM Jason RJ wrote:


Hey Bill,

Take a look at the demo content and see how the Gift Cards are setup -
they have a product survey that captures the user's response to 
questions.



https://demo-stable.ofbiz.apache.org/ecommerce/gift-card-activation-c50-GC-001-C50-p 



Hope that helps,

Jason

On 13/07/2023 14:58, Bill Harder wrote:

Happy Thursday...

I am looking for a way where a user can enter custom text on an order

line

item.

A user orders a toolbag and wants to have his name embroidered on 
it in

Red

thread.  It's fairly easy to set up a configurable item with a thread

color
selection but where would be a good place to start for a name entry 
of 10

letters.

The only discussion I have been able to find is from back in 2016 on
MarkMail.

https://ofbiz.markmail.org/search/?q=user-provided+data#query:user-provided%20data+page:1+mid:qj53fjm3s5qydvmh+state:results 


Any pointers are most appreciated. Thanks.

Bill





Re: User Entered Text on Order

2023-07-13 Thread Jason RJ

Hi Bill

I don't think you have another OOTB option except the 'comment' box on 
the product page. You could add some text to the product description to 
tell the user that they should provide their chosen name in the comment 
field. This gets carried through to an Order Item Attribute and is shown 
on the checkout screen etc.


You might have comments disabled depending on the value of 
'order.item.comment.enable' in order.properties.


Jason

On 13/07/2023 18:27, Bill Harder wrote:

Thank you Jason.

I see how that works now.  Are you aware of any other inputs that might be
on the same initial screen?

I am thinking about OOTB so I don't have to custom develop.

Bill

On Thu, Jul 13, 2023 at 8:34 AM Jason RJ  wrote:


Hey Bill,

Take a look at the demo content and see how the Gift Cards are setup -
they have a product survey that captures the user's response to questions.


https://demo-stable.ofbiz.apache.org/ecommerce/gift-card-activation-c50-GC-001-C50-p

Hope that helps,

Jason

On 13/07/2023 14:58, Bill Harder wrote:

Happy Thursday...

I am looking for a way where a user can enter custom text on an order

line

item.

A user orders a toolbag and wants to have his name embroidered on it in

Red

thread.  It's fairly easy to set up a configurable item with a thread

color

selection but where would be a good place to start for a name entry of 10
letters.

The only discussion I have been able to find is from back in 2016 on
MarkMail.


https://ofbiz.markmail.org/search/?q=user-provided+data#query:user-provided%20data+page:1+mid:qj53fjm3s5qydvmh+state:results

Any pointers are most appreciated.  Thanks.

Bill





Re: User Entered Text on Order

2023-07-13 Thread Jason RJ

Hey Bill,

Take a look at the demo content and see how the Gift Cards are setup - 
they have a product survey that captures the user's response to questions.


https://demo-stable.ofbiz.apache.org/ecommerce/gift-card-activation-c50-GC-001-C50-p

Hope that helps,

Jason

On 13/07/2023 14:58, Bill Harder wrote:

Happy Thursday...

I am looking for a way where a user can enter custom text on an order line
item.

A user orders a toolbag and wants to have his name embroidered on it in Red
thread.  It's fairly easy to set up a configurable item with a thread color
selection but where would be a good place to start for a name entry of 10
letters.

The only discussion I have been able to find is from back in 2016 on
MarkMail.
https://ofbiz.markmail.org/search/?q=user-provided+data#query:user-provided%20data+page:1+mid:qj53fjm3s5qydvmh+state:results

Any pointers are most appreciated.  Thanks.

Bill


Re: Lost my dashboard

2023-06-15 Thread RJ
Dmytri
Thx. I thought I had something displayable enabled. Enabling Favorites gets me 
a dash. Duh!

Maybe devs could display a placeholder to help the somewhat-challenged... 

Have the good day!
Rufus

Jun 14, 2023 23:44:54 Dmytro Prodchenko :

> img3.jpg is dashboard. img2.jpg shows that mostly all cards on dashboard are 
> disabled, so it looks empty, it also shows that you can access dashboard 
> directly from the main map screen (instead of open Menu on the left).
> 
> On Tuesday, 13 June 2023 at 21:20:09 UTC+3 rlag...@mail.com wrote:
>> Dmytro
>> 
>> Heres some screens. Osmand maybe makes room for the dashbd but doesnt 
>> display it?
>> 
>> Rufus
>> 
>> Jun 12, 2023 23:58:54 Dmytro Prodchenko :
>> 
>>>  
>>> Hi, Rufus! Could you please provide screenshot?
>>> Please check that "Dashboard" item is added in Menu – Settings – Profile – 
>>> UI Customization – Drawer.
>>> 
>>> On Sunday, 11 June 2023 at 23:09:20 UTC+3 rlag...@mail.com wrote:
 Running 4.4.7
 Samsung S22u, 12GB
 
 Did something and lost the dashboard. Entered setting as posted above. 
 Behavior changed slightly but dashboard still does not appear.  Now, 
 additionally, clicking Menu (lower left) causes all buttons except the 
 Config (top left) and the DashboardSettings (top right) to disappear _and_ 
 moves the Search button to the top 1/4 of the screen. Zoom buttons, 
 navigation button, distance legend, all disappear. The bottom 3/4 of the 
 screen is dead - no buttons, won't pan, nothing. Clicking anywhere in the 
 upper 1/4 of the screen returns the normal buttons to the bottom and the 
 screen can be panned and zoomed again.
 
 Under DashboardConfig, I have set OpenAtStart and MenuButtonOpensDashboard.
 
 Also, now (and before? I dunno), hitting the Search button returns me to 
 my current position instead of where I was looking at the map (1300 miles 
 distant).
 
 This behavior is really unhelpful and mining Settings for magic buttons is 
 very tedious and not at all productive.  Ideas appreciated.
 
 Thx
 Rufus
 
 On Sunday, April 30, 2023 at 11:34:05 PM UTC-7 ruhe...@gmail.com wrote:
> Your mentioned settings were set. But, OK, I did a real discovery tour.
> I discovered that the button for Quick Action was no longer on the
> screen, and in the bottom left corner I had a button I'd never seen
> before. Then, when clicking on this button, the screen for Quick Action
> appeared instead of the main menu.
> Could it be that I've got two buttons stacked on one another? Yes.
> I do not know why, when, and how the (transparent) Quick Action button
> came to cover the main menu button.
> I then tried to click this scrambled button, hold it, and move it...
> Yes, that's it. The Quick Action button had been moved to the place
> above the main menu button.
> OK, I didn't imagine that the buttons in the menu were movable. Why
> not! But one should know.
> Thank you lo.
> 
> lodrog...@gmail.com schrieb am Samstag, 29. April 2023 um 15:23:47 UTC+2:
>> …
>>> 
> 

-- 
You received this message because you are subscribed to the Google Groups 
"OsmAnd" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to osmand+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/osmand/aab1bc72-d6ba-47c0-9da6-9b3a2c283d66%40mail.com.


[DISCUSS][DSIP-19][DataSource-Plugin、UI] Expand the data source center into a connection center

2023-06-13 Thread rj chen
Hi, community
This is a correction to the previous email

Issue Name: Expand the data source center into a connection center

Issue Background:
1. The DolphinScheduler has a component of the Datasource Center
that is used to manage external connections for SQL tasks, such as MySQL,
hive, spark, etc.
2. But not only SQL tasks, but also some other
DolphinScheduler task plugins require external connections, such as AWS EMR
tasks, Zeppelin tasks, K8S tasks, etc. We can enrich the scenarios that
require the Datasource Center to manage connections, especially those
external systems with credentials, and upgrade them to the Connection

Center.Related Issues:
[Feature] Add connection center feature for DS #10283


Issue Target:
1. Change the name of the Datasource Center to Connection Center.
2. Reconstruct some AWS EMR, Zeppelin, K8S, and Sagemaker task plugins to
facilitate user
management of external connections in the connection center.
3. Cluster Management and K8S Namespace Management in the security center
are removed
because managing K8S clusters is not the job of Big data orchestration
tools. Users can configure K8S connections for K8S task plugins in the
connection center.

Design scheme (taking Zeppelin as an example):
You can see it in the first comment.


1. Support for Zeppelin connections in the Connection Center

Add the `zeppelin` module to the `datasource-plugin` module, and add key
classes such as `ZeppelinConnectionParam`, ` ZeppelinDataSourceParamDTO`,
`ZeppelinDataSourceProcessor`, `ZeppelinDataSourceChannel`,
`ZeppelinDataSourceChannelFactory`, and `ZeppelinDataSourceClient`.

2. Support the selection of online Zeppelin connections in the Zeppelin task.

a) Zeppelin task front-end interface changes (removed input for username
and password).

b) Add the parameters restEndpoint, username, and password to the model
of tasks/use-zeppelin.ts.

c) Before the use-zeppelin function returns, fill in the username,
password, and restEndpoint parameters with the parameters of the
connection selected in the drop-down box. Obtain the selected connection
ID, query its detailed information in the database based on the ID, and
extract the username, password, and restEndpoint; the password should be
encrypted and may need to be decrypted.


[DISCUSS][DSIP][Improvement][datasource-plugin、UI] Expand the data source center into a connection center

2023-06-13 Thread rj chen
Issue Name: Expand the data source center into a connection center

Issue Background:
1. The DolphinScheduler has a component of the Datasource Center that is
used to manage external connections for SQL tasks, such as MySQL, hive,
spark, etc.
2. Not only SQL tasks: some other DolphinScheduler task plugins also
require external connections, such as AWS EMR tasks, Zeppelin tasks, K8S
tasks, etc. We can enrich the scenarios that require the Datasource
Center to manage connections, especially those external systems with
credentials, and upgrade it to the Connection Center.

Related Issues:
[Feature] Add connection center feature for DS #10283


### Issue Target:
1. Change the name of the Datasource Center to Connection Center.
2. Reconstruct some AWS EMR, Zeppelin, K8S, and Sagemaker task plugins to
facilitate user management of external connections in the connection
center.
3. Cluster Management and K8S Namespace Management in the security center
are removed, because managing K8S clusters is not the job of big-data
orchestration tools. Users can configure K8S connections for K8S task
plugins in the connection center.

Design scheme (taking Zeppelin as an example):

1. Support for Zeppelin connections in the Connection Center

   - Add the zeppelin module to the datasource-plugin module, and add key
     classes such as ZeppelinConnectionParam, ZeppelinDataSourceParamDTO,
     ZeppelinDataSourceProcessor, ZeppelinDataSourceChannel,
     ZeppelinDataSourceChannelFactory, and ZeppelinDataSourceClient.

   *Several key base class designs that need to be discussed*:
   ZeppelinConnectionParam:

@Data
@JsonInclude(JsonInclude.Include.NON_NULL)
public class ZeppelinConnectionParam implements ConnectionParam {

protected String user;

protected String password;

protected String host;

protected int port = 8080;
}

   ZeppelinDataSourceProcessor:

   This class implements the DataSourceProcessor interface and overrides
   all of its methods, including the default testConnection method, which
   is used to detect whether Zeppelin can connect.

@Override
public Result checkConnection(DbType type, ConnectionParam connectionParam) {
    Result result = new Result<>();
    // something
    if (type == DbType.ZEPPELIN) {
        DataSourceProcessor zeppelinDataSourceProcessor =
                DataSourceUtils.getDatasourceProcessor(type);
        if (zeppelinDataSourceProcessor.testConnection(connectionParam)) {
            putMsg(result, Status.SUCCESS);
        } else {
            putMsg(result, Status.CONNECT_DATASOURCE_FAILURE);
        }
        return result;
    }
    // something
}

   -

   Front end design of zeppelin connection:

   [image: image-20230613113210281]

2. Support the selection of online Zeppelin connections in the Zeppelin task

   - Zeppelin task front-end interface changes (removed input for
     username and password)

   [image: image-20230613113422370]

   - Add the following parameters to the model of tasks/use-zeppelin.ts:

type: 'ZEPPELIN',
displayRows: 10,
restEndpoint: '',
username: '',
password: ''

   The final model is:

const model = reactive({
name: '',
taskType: 'ZEPPELIN',
flag: 'YES',
description: '',
timeoutFlag: false,
localParams: [],
environmentCode: null,
failRetryInterval: 1,
failRetryTimes: 0,
workerGroup: 'default',
delayTime: 0,
timeout: 30,
timeoutNotifyStrategy: ['WARN'],
type: 'ZEPPELIN',
displayRows: 10,
restEndpoint: '',
username: '',
password: ''
  } as INodeData)

   - Before the use-zeppelin function returns, fill in the username,
     password, and restEndpoint parameters with the parameters of the
     connection selected in the drop-down box.

     Obtain the selected connection ID, query its detailed information in
     the database based on the ID, and extract the username, password,
     and restEndpoint; the password should be encrypted and may need to
     be decrypted.

return {
json: [
  Fields.useName(from),
  ...Fields.useTaskDefinition({ projectCode, from, readonly,
data, model }),
  Fields.useRunFlag(),
  Fields.useCache(),
  Fields.useDescription(),
  Fields.useTaskPriority(),
  Fields.useWorkerGroup(),
  Fields.useEnvironmentName(model, !data?.id),
  ...Fields.useTaskGroup(model, projectCode),
  ...Fields.useFailed(),
  Fields.useDelayTime(model),
  ...Fields.useTimeoutAlarm(model),
  ...Fields.useZeppelin(model),
  Fields.usePreTasks()
] as IJsonItem[],
model
  }


[rancid] Has anyone worked on adding Transition Networks switches?

2023-04-19 Thread RJ Watt
Hello, kind and helpful subscribers. The model we have is a "Transition
Networks SM24DPB Managed" switch, and we haven't been able to find any
RANCID-related posts online or in this mailing list.
___
Rancid-discuss mailing list
Rancid-discuss@www.shrubbery.net
https://www.shrubbery.net/mailman/listinfo/rancid-discuss


[Conselhobrasil] flavio-bordoni expired from team

2023-04-17 Thread Ubuntu Brasil - RJ
Hello Conselho Ubuntu Brasil,

The membership of Flavio Bordoni (flavio-bordoni) in the Ubuntu Brasil -
RJ (ubuntu-br-rj) team has expired.
<https://launchpad.net/~ubuntu-br-rj>

Regards,
The Launchpad team



Re: Link YouTube channel to Google Ads

2023-04-04 Thread RJ
Thanks, I am doing well; hope you are doing well also. Is it out of scope
because it's not a feature supported via the API yet? Is there a plan to
add it to the API, since it's a Google Ads feature, from what I understand?

On Tuesday, April 4, 2023 at 4:30:31 AM UTC-7 Google Ads API Forum Advisor 
wrote:

> Hi RJ,
>
> Thank you for reaching out to the Google Ads API support team. I hope that 
> you are doing well today.
>
> Linking of Youtube channels to Google Ads account is out of scope for our 
> team. With this, you may refer to this link 
> <https://support.google.com/google-ads/answer/3063482?hl=en#zippy=%2Clink-or-unlink-google-ads-accounts-in-youtube-recommended-approach>
>  which 
> you provided on how to link Youtube channels to Google Ads.
>
> You may also reach out instead to the YouTube Help section via this link 
> <https://support.google.com/youtube/?hl=en#topic=9257498> and Google Ads 
> product team via this link <https://support.google.com/google-ads/gethelp>, 
> as they are better equipped to provide guidance on this matter.
>
> Kind regards, 
> [image: Google Logo] Google Ads API Team 
>
> ref:_00D1U1174p._5004Q2kEzmQ:ref
>

-- 
-- 
=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~
Also find us on our blog:
https://googleadsdeveloper.blogspot.com/
=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~

You received this message because you are subscribed to the Google
Groups "AdWords API and Google Ads API Forum" group.
To post to this group, send email to adwords-api@googlegroups.com
To unsubscribe from this group, send email to
adwords-api+unsubscr...@googlegroups.com
For more options, visit this group at
http://groups.google.com/group/adwords-api?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"Google Ads API and AdWords API Forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to adwords-api+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/adwords-api/bb6f2da9-1fda-4b3e-a36d-df05c3af555dn%40googlegroups.com.


Link YouTube channel to Google Ads

2023-04-03 Thread RJ
Can I link a YouTube channel to Google Ads accounts via the API? I know there is
the flow to do it in the UI as mentioned
here https://support.google.com/google-ads/answer/3063482?hl=en but I am
looking for a way to do it via the API.

Thanks 



[Conselhobrasil] mdantas extended their membership

2023-04-02 Thread Ubuntu Brasil - RJ
Hello Conselho Ubuntu Brasil,

Alexander Pindarov (mdantas) renewed their own membership in the Ubuntu
Brasil - RJ (ubuntu-br-rj) team until 2024-04-15.
<https://launchpad.net/~ubuntu-br-rj>

Regards,
The Launchpad team



[Conselhobrasil] profmarcilio extended their membership

2023-03-27 Thread Ubuntu Brasil - RJ
Hello Conselho Ubuntu Brasil,

Marcilio Bergami de Carvalho (profmarcilio) renewed their own membership
in the Ubuntu Brasil - RJ (ubuntu-br-rj) team until 2024-04-15.
<https://launchpad.net/~ubuntu-br-rj>

Regards,
The Launchpad team



[Conselhobrasil] fcostapb extended their membership

2023-03-19 Thread Ubuntu Brasil - RJ
Hello Conselho Ubuntu Brasil,

Francisco Costa (fcostapb) renewed their own membership in the Ubuntu
Brasil - RJ (ubuntu-br-rj) team until 2024-04-15.
<https://launchpad.net/~ubuntu-br-rj>

Regards,
The Launchpad team



Getting started with beancount for my business

2023-01-22 Thread RJ
Hi,

I'm starting to use beancount for my small business and I have some 
questions. Reading the beancount docs only got me so far, so I would like 
to hear whether I'm heading in the right direction.

Suppose I send an invoice to a client for a service I delivered to Customer 
X, I guess I would post it like this:


2023-03-02 * "Invoice 2023_1 Things I did for you" ^invoice_2023_1
  Liabilities:Customers:X   1000.00 EUR
  Income:Invoices           -800.00 EUR
  Liabilities:Tax:VAT       -200.00 EUR

20% VAT is included in the invoice.

I'm struggling a bit with Assets vs. Liabilities and the signs here: should
the customer account be an asset or a liability?

And then, when I receive the money from the customer I guess I would do:

2023-04-02 * "Payment for invoice 2023_1" ^invoice_2023_1
  Liabilities:Customers:X     -1000.00 EUR
  Assets:Current                800.00 EUR
  Assets:Current:VATReserve     200.00 EUR

I like to reserve the money for the taxes in a subaccount, does that make 
sense? Is there a better way to do that with beancount?
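For comparison, the textbook double-entry treatment records an unpaid invoice as an asset (a receivable: money the customer owes you), not a liability. A sketch under that assumption, with illustrative account names (the corresponding open directives are omitted):

```beancount
2023-03-02 * "Invoice 2023_1" ^invoice_2023_1
  Assets:Receivables:X          1000.00 EUR  ; customer owes us: an asset
  Income:Invoices               -800.00 EUR
  Liabilities:Tax:VAT           -200.00 EUR  ; VAT collected, owed to the state

2023-04-02 * "Payment for invoice 2023_1" ^invoice_2023_1
  Assets:Receivables:X         -1000.00 EUR  ; receivable settled
  Assets:Current                 800.00 EUR
  Assets:Current:VATReserve      200.00 EUR  ; earmarked for the VAT bill
```

Each transaction sums to zero, and the VAT stays on the books as a liability until it is actually paid to the tax office.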

TIA

-- 
You received this message because you are subscribed to the Google Groups 
"Beancount" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to beancount+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/beancount/9d6bc6f9-5937-45c9-a88f-6bbf262466can%40googlegroups.com.


[jira] [Commented] (SEDONA-156) predicate pushdown support for GeoParquet

2023-01-05 Thread RJ Marcus (Jira)


[ 
https://issues.apache.org/jira/browse/SEDONA-156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17655105#comment-17655105
 ] 

RJ Marcus commented on SEDONA-156:
--

Jia is correct; my work schedule was reprioritized and I'm not currently 
working on it. 

For context, the road I was going down seemed like it would require significant 
changes upstream in Spark to allow Data Sources to push down V2 `Predicates` 
instead of `Filters` since our predicate is "postgresql" syntax instead of just 
sql. The large amount of upstream changes required became a significant 
roadblock for me in terms of time. 

It sounds like you have a simpler implementation, [~Kontinuation] , which is 
good in my opinion. If you are interested in the method I was trying, please 
reach out. 

> predicate pushdown support for GeoParquet
> -
>
> Key: SEDONA-156
> URL: https://issues.apache.org/jira/browse/SEDONA-156
> Project: Apache Sedona
>  Issue Type: New Feature
>Reporter: RJ Marcus
>Assignee: Kristin Cowalcijk
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Support for filter predicate for the new GeoParquet reader. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (SPARK-40955) allow DSV2 Predicate pushdown in FileScanBuilder.pushedDataFilter

2022-10-28 Thread RJ Marcus (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-40955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

RJ Marcus updated SPARK-40955:
--
Description: 
{+}overall{+}: 


Allow FileScanBuilder to push `Predicate` instead of `Filter` for data filters 
being pushed down to source. This would allow new (arbitrary) DS V2 Predicates 
to be pushed down to the file source. 




Hello spark developers,

Thank you in advance for reading. Please excuse me if I make mistakes; this is 
my first time working on apache/spark internals. I am asking these questions to 
better understand whether my proposed changes fall within the intended scope of 
Data Source V2 API functionality.


+Motivation / Background:+

I am working on a branch in 
[apache/incubator-sedona|https://github.com/apache/incubator-sedona] to extend 
its support of geoparquet files to include predicate pushdown of postGIS style 
spatial predicates (e.g. `ST_Contains()`) that can take advantage of spatial 
info in file metadata. We would like to inherit as much as possible from the 
Parquet classes (because geoparquet basically just adds a binary geometry 
column). However, {{FileScanBuilder.scala}} appears to be missing some 
functionality I need for DSV2 {{{}Predicates{}}}.


+My understanding of the problem so far:+

The ST_* {{Expression}} must be detected as a pushable predicate 
(ParquetScanBuilder.scala:71) and passed as a {{pushedDataFilter}} to the 
{{parquetPartitionReaderFactory}} where it will be translated into a (user 
defined) {{{}FilterPredicate{}}}.

The [Filter class is 
sealed|https://github.com/apache/spark/blob/branch-3.3/sql/catalyst/src/main/scala/org/apache/spark/sql/sources/filters.scala]
 so the sedona package can’t define new Filters; DSV2 Predicate appears to be 
the preferred method for accomplishing this task (referred to as “V2 Filter”, 
SPARK-39966). However, `pushedDataFilters` in FileScanBuilder.scala is of type 
{{{}sources.Filter{}}}.

Some recent work (SPARK-39139) added the ability to detect user defined 
functions in  {{DataSourceV2Strategy.translateFilterV2() > 
V2ExpressionBuilder.generateExpression()}}  , which I think could accomplish 
detection correctly if {{FileScanBuilder}} called 
{{DataSourceV2Strategy.translateFilterV2()}} instead of 
{{{}DataSourceStrategy.translateFilter(){}}}.

However, changing {{FileScanBuilder}} to use {{Predicate}} instead of 
{{Filter}} would require many changes to all file based data sources. I don’t 
want to spend effort making sweeping changes if the current behavior of Spark 
is intentional.

 

+Concluding Questions:+

Should {{FileScanBuilder}} be pushing {{Predicate}} instead of {{Filter}} for 
data filters being pushed down to source? Or maybe in a FileScanBuilderV2?

If not, how can a developer of a data source push down a new (or user defined) 
predicate to the file source?

Thank you again for reading. Pending feedback, I will start working on a PR for 
this functionality.



[~beliefer] [~cloud_fan] [~huaxingao]   have worked on DSV2 related spark 
issues and I welcome your input. Please ignore this if I "@" you incorrectly.


[jira] [Created] (SPARK-40955) allow DSV2 Predicate pushdown in FileScanBuilder.pushedDataFilter

2022-10-28 Thread RJ Marcus (Jira)
RJ Marcus created SPARK-40955:
-

 Summary: allow DSV2 Predicate pushdown in 
FileScanBuilder.pushedDataFilter 
 Key: SPARK-40955
 URL: https://issues.apache.org/jira/browse/SPARK-40955
 Project: Spark
  Issue Type: Improvement
  Components: Input/Output, SQL
Affects Versions: 3.3.1
Reporter: RJ Marcus


{+}overall{+}: 
Allow FileScanBuilder to push `Predicate` instead of `Filter` for data filters 
being pushed down to source. This would allow new (arbitrary) DS V2 Predicates 
to be pushed down to the file source. 




Hello spark developers,

Thank you in advance for reading. Please excuse me if I make mistakes; this is 
my first time working on apache/spark internals. I am asking these questions to 
better understand whether my proposed changes fall within the intended scope of 
Data Source V2 API functionality.

+Motivation / Background:+

I am working on a branch in 
[apache/incubator-sedona|https://github.com/apache/incubator-sedona] to extend 
its support of geoparquet files to include predicate pushdown of postGIS style 
spatial predicates (e.g. `ST_Contains()`) that can take advantage of spatial 
info in file metadata. We would like to inherit as much as possible from the 
Parquet classes (because geoparquet basically just adds a binary geometry 
column). However, {{FileScanBuilder.scala}} appears to be missing some 
functionality I need for DSV2 {{{}Predicates{}}}.

+My understanding of the problem so far:+

The ST_* {{Expression}} must be detected as a pushable predicate 
(ParquetScanBuilder.scala:71) and passed as a {{pushedDataFilter}} to the 
{{parquetPartitionReaderFactory}} where it will be translated into a (user 
defined) {{{}FilterPredicate{}}}.

The [Filter class is 
sealed|https://github.com/apache/spark/blob/branch-3.3/sql/catalyst/src/main/scala/org/apache/spark/sql/sources/filters.scala]
 so the sedona package can’t define new Filters; DSV2 Predicate appears to be 
the preferred method for accomplishing this task (referred to as “V2 Filter”, 
SPARK-39966). However, `pushedDataFilters` in FileScanBuilder.scala is of type 
{{{}sources.Filter{}}}.

Some recent work (SPARK-39139) added the ability to detect user defined 
functions in  {{DataSourceV2Strategy.translateFilterV2() > 
V2ExpressionBuilder.generateExpression()}}  , which I think could accomplish 
detection correctly if {{FileScanBuilder}} called 
{{DataSourceV2Strategy.translateFilterV2()}} instead of 
{{{}DataSourceStrategy.translateFilter(){}}}.

However, changing {{FileScanBuilder}} to use {{Predicate}} instead of 
{{Filter}} would require many changes to all file based data sources. I don’t 
want to spend effort making sweeping changes if the current behavior of Spark 
is intentional.

+Concluding Questions:+

Should {{FileScanBuilder}} be pushing {{Predicate}} instead of {{Filter}} for 
data filters being pushed down to source? Or maybe in a FileScanBuilderV2?

If not, how can a developer of a data source push down a new (or user defined) 
predicate to the file source?

Thank you again for reading. Pending feedback, I will start working on a PR for 
this functionality.


[~beliefer] [~cloud_fan] [~huaxingao]   have worked on DSV2 related spark 
issues and I welcome your input. Please ignore this if I "@" you incorrectly.




-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SEDONA-156) predicate pushdown support for GeoParquet

2022-09-23 Thread RJ Marcus (Jira)


[ 
https://issues.apache.org/jira/browse/SEDONA-156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17608875#comment-17608875
 ] 

RJ Marcus commented on SEDONA-156:
--

Jia, thank you for the response! Overall it sounds like it’s headed in the 
right direction.

WRT “pushed filters” and “partition filters”, I thought that the _pushed_ 
filters are going to be using BBox during the scan process to skip files? I 
have extracted the bbox from the parquet metadata to pass into the 
pushedFilters sets of functions. I have been largely ignoring partition filters 
for now.

I took a look at the repositories in the neapowers website (itachi, bebe, 
sql-alchemy), and I am not convinced that they are applicable to this scenario 
because those Expressions are all data transformations (on existing sql 
datatypes) instead of predicate filters (so they don’t even have to worry about 
being pushed down anyway). If they did have custom predicate filters I think it 
would run into the same problem unless they use the operators that already 
exist in sql syntax ( <, > , =, OR, AND, NOT, >=, <=, IS NULL, IN, CONTAINS, 
etc. [see 
Filters|https://github.com/apache/spark/blob/branch-3.3/sql/catalyst/src/main/scala/org/apache/spark/sql/sources/filters.scala])

It’s good that the ST_* functions are defined as Expressions. I _think_ that we 
should be able to coerce them to work in the DatasourceV2 API.

The problem I mentioned earlier is that even though in V2 API the +pushFilters+ 
function takes in  {{Seq[Expression]}} , [the function that actually pushes the 
expression to the datasource is pushDataFilters. 
|https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/FileScanBuilder.scala#L90-L95]That
 one takes {{Array[Filter]}} which cannot be extended to allow new definitions 
for our ST_* predicates. The ST_Within predicate basically gets to that 
DataSourceStrategy.translateFilter and then fails because it can’t be 
translated into a {{Filter}} .
{code:java}
 override def pushFilters(filters: Seq[Expression]): Seq[Expression] = {
val (partitionFilters, dataFilters) =
  DataSourceUtils.getPartitionFiltersAndDataFilters(partitionSchema, 
filters)
this.partitionFilters = partitionFilters
this.dataFilters = dataFilters
val translatedFilters = mutable.ArrayBuffer.empty[sources.Filter]
for (filterExpr <- dataFilters) {
  val translated = DataSourceStrategy.translateFilter(filterExpr, true)
  if (translated.nonEmpty) {
translatedFilters += translated.get
  }
}
pushedDataFilters = pushDataFilters(translatedFilters.toArray)
dataFilters
  }

  override def pushedFilters: Array[Predicate] = pushedDataFilters.map(_.toV2)

  /*
   * Push down data filters to the file source, so the data filters can be 
evaluated there to
   * reduce the size of the data to be read. By default, data filters are not 
pushed down.
   * File source needs to implement this method to push down data filters.
   */
  protected def pushDataFilters(dataFilters: Array[Filter]): Array[Filter] = 
Array.empty[Filter]
{code}
So, I _think_ that we can override +pushFilters+ and +pushedFilters+ to be able 
to translate the original filters (e.g. {{col1 == 5}}) the way it is doing now; 
then we ignore +pushDataFilters+ since it’s not used anywhere else. Finally, we 
rewrite a bunch of downstream functions in the existing 
ParquetScanBuilder/ParquetFilter, which currently only deal with {{Filter}}. 
I’m attempting to rewrite these as minimally as possible.

I'll update with info after I've tried that.

> predicate pushdown support for GeoParquet
> -
>
> Key: SEDONA-156
> URL: https://issues.apache.org/jira/browse/SEDONA-156
> Project: Apache Sedona
>  Issue Type: New Feature
>Reporter: RJ Marcus
>Priority: Major
> Fix For: 1.3.0
>
>
> Support for filter predicate for the new GeoParquet reader. 





[jira] [Commented] (SEDONA-156) predicate pushdown support for GeoParquet

2022-09-21 Thread RJ Marcus (Jira)


[ 
https://issues.apache.org/jira/browse/SEDONA-156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17607857#comment-17607857
 ] 

RJ Marcus commented on SEDONA-156:
--

Hello [~jiayu] , I have spent a while working on SEDONA-156 and I think that my 
approach might be misguided. Maybe you can tell me if I am making a big mistake 
by overlooking something.

Background: the current code in sedona is using the DataSource V1 API. If we 
want to use the V2 API we need classes in 
{{{}org/apache/spark/sql/execution/datasources/v2/parquet/{}}}. E.g. 
{{GeoParquetDataSourceV2.scala}} . Otherwise it defaults to loading as a V1 
source via {{{}GeoParquetFileFormat.scala{}}}.

 

—

 

V1 of the DataSource API pushes down predicates of type {{sources.Filter}} 
which is a sealed class, so I can’t add more definitions for these new types of 
filters. I thought that probably we want the {{Expression}} (s) in 
{{Predicates.scala}} ({{{}ST_Within(){}}}, {{{}ST_Intersects(){}}}, etc) to be 
recognized as the predicate and passed down, e.g.
{code:java}
 
sparkSession.read.format("geoparquet").load(path).filter("ST_Within(geometry,ST_PolygonFromEnvelope(-1,
 -1, 1, 1))") {code}
.

Otherwise we could “hijack” the existing {{sources.Filter}} syntax e.g. :
{code:java}
 sparkSession.read.format("geoparquet").load(path).filter(“ WHERE 
point_df.geometry IN ST_PolygonFromEnvelope(-1, -1, 1, 1)”) {code}
? Everywhere else in the V1 API ({{{}GeoParquetFileFormat.scala{}}}, 
{{{}GeoParquetFilters.scala{}}}) it uses {{sources.Filter}}

So the first question is, am I missing something here? It seems like a fairly 
big hurdle that the {{sources.Filter}} class is sealed. (This even causes 
problems in DataSourceV2 API). Would we want to “hijack” the existing 
{{sources.Filter}} syntax instead of using {{ST_*()}} syntax?

 

 

After SPARK-36351 was merged in (spark 3.3+), {{FileScanBuilder.pushFilters}} 
uses {{Expression}} instead of {{Filter}} with the new interface 
{{{}SupportsPushDownCatalystFilters{}}}. Based on the discussion of that PR, it 
seems like this update is meant mostly for internal bookkeeping of data filters 
vs partition filters, but I think that we could use it to correctly pass our 
data filter {{Expression}} into a {{Predicate}} in {{GeoParquetScanBuilder}} 
and then into a user-defined {{FilterPredicate}} in {{GeoParquetFilters}} . 
This requires some geo* copies of the parquet versions of the files in 
{{org/apache/spark/sql/execution/datasources/v2/parquet/}} for the V2 API, and 
overriding several methods to deal with {{Predicate}} (s) instead of 
{{{}sources.Filter{}}}. This is what I’m working on right now and I think I'm 
pretty close, but I am worried that it may be too complicated of a solution and 
I am overlooking something.

Second question: Does this DataSourceV2 approach sound promising or totally off 
track? 



Thank you for taking the time to read. I am still learning the codebase, but I 
have a much better understanding than I did two weeks ago. Any feedback is 
appreciated.

> predicate pushdown support for GeoParquet
> -
>
> Key: SEDONA-156
> URL: https://issues.apache.org/jira/browse/SEDONA-156
> Project: Apache Sedona
>      Issue Type: New Feature
>Reporter: RJ Marcus
>Priority: Major
> Fix For: 1.3.0
>
>
> Support for filter predicate for the new GeoParquet reader. 





[DISCUSS][DSIP-13][python] New mechanism file plugins to Python API

2022-09-15 Thread rj chen
Hi community,
At present, a task in the Python API receives the command it executes as a
literal string parameter. For example, a shell task:
Shell(name="task_parent", command="echo hello pydolphinscheduler").
A file is a better choice when the command a task executes is very long.

Zhongjiajie guided me; we call this the new mechanism, file plugins for the
Python API. Resource plugins can be used in workflows and tasks. When both
have resource plugins, the resource plugin in the task is used first.
Plugins I will implement include, but are not limited to, the local file
system, GitHub, GitLab, Amazon S3, and Alibaba Cloud OSS.

This is a tutorial for the local resource plugin; you can find it in PR[1].

import os
from pathlib import Path

with ProcessDefinition(
    name="tutorial_resource_plugin",
    schedule="0 0 0 * * ? *",
    start_time="2022-01-01",
    tenant="tenant_exists",
    resource_plugin=Local("/tmp"),
) as process_definition:
    file = "resource.sh"
    path = Path("/tmp").joinpath(file)
    with open(str(path), "w") as f:
        f.write("echo tutorial resource plugin")
    task_parent = Shell(
        name="local-resource-example",
        command=file,
    )
    print(task_parent.task_params)
    os.remove(path)
    process_definition.run()

Below is my draft design for the shell task.
You can see the GitLab resource plugin in PR[2] to understand my design.

I abstracted a Resource class and created an abstract function read_file
for it. All resource plugins need to inherit from it and implement the
abstract function read_file.

Add properties in Shell:
ext: set = {".sh", ".zsh"}
ext is used to enforce that shell tasks only accept files ending in .sh
and .zsh.
ext_attr: str = "_raw_script"
At the same time, the init function sets
self._raw_script = command
ext_attr names the attribute that holds the file path.

Add a new parameter, resource_plugin, to the init function of the task to
accept resource plugins, add a new method to obtain the specific resource
plugin, and also add the attributes ext and ext_attr.
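The abstract Resource class described above might look roughly like this; the class names, the read_file method, and the Local("/tmp") prefix follow the draft and the PRs, but the details here are a sketch, not the merged implementation:

```python
import os
from abc import ABC, abstractmethod


class Resource(ABC):
    """Base class for resource plugins; every plugin implements read_file."""

    @abstractmethod
    def read_file(self, suf: str) -> str:
        """Return the content of the file identified by ``suf``."""


class Local(Resource):
    """Sketch of a local-filesystem plugin reading scripts under a prefix."""

    def __init__(self, prefix: str):
        self.prefix = prefix

    def read_file(self, suf: str) -> str:
        # Resolve the task's command (a file name) against the plugin prefix.
        with open(os.path.join(self.prefix, suf)) as f:
            return f.read()
```

A task constructed with resource_plugin=Local("/tmp") and command="resource.sh" would then fetch its script content via plugin.read_file(command) before execution, instead of treating the command as a literal shell string.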

Additional details can be found in pr[1] and pr[2].
I'm sorry: this is my first time contributing, and I didn't follow the
DSIP process.

I already add a GitHub Issue for my proposal, which you could see in
https://github.com/apache/dolphinscheduler/issues/10911
.

Looking forward to any feedback on this thread.

[1]: https://github.com/apache/dolphinscheduler/pull/11360 <
https://github.com/apache/dolphinscheduler/pull/11360>
[2]: https://github.com/apache/dolphinscheduler/pull/11831 <
https://github.com/apache/dolphinscheduler/pull/11831>


[jira] [Commented] (SEDONA-156) predicate pushdown support for GeoParquet

2022-09-01 Thread RJ Marcus (Jira)


[ 
https://issues.apache.org/jira/browse/SEDONA-156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17599073#comment-17599073
 ] 

RJ Marcus commented on SEDONA-156:
--

[~jiayu] [~Ashar] , Thank you for all of your work on sedona and the recent 
addition of the geoparquet reader. 

I would really like to help contribute to the predicate pushdown filter for 
geoparquet in order to get it into sedona 1.3.0 and hopefully not affect your 
workload too much.  If either of you already have ideas for this feature or can 
provide general direction, it would help me greatly as I become familiar with 
the geoparquet code (and spark/sedona codebases).

> predicate pushdown support for GeoParquet
> -
>
> Key: SEDONA-156
> URL: https://issues.apache.org/jira/browse/SEDONA-156
> Project: Apache Sedona
>  Issue Type: New Feature
>Reporter: RJ Marcus
>Priority: Major
> Fix For: 1.2.1
>
>
> Support for filter predicate for the new GeoParquet reader. 





[jira] [Created] (SEDONA-156) predicate pushdown support for GeoParquet

2022-09-01 Thread RJ Marcus (Jira)
RJ Marcus created SEDONA-156:


 Summary: predicate pushdown support for GeoParquet
 Key: SEDONA-156
 URL: https://issues.apache.org/jira/browse/SEDONA-156
 Project: Apache Sedona
  Issue Type: New Feature
Reporter: RJ Marcus
 Fix For: 1.2.1


Support for filter predicate for the new GeoParquet reader. 





[jira] [Commented] (SPARK-40038) spark.sql.files.maxPartitionBytes does not observe on-disk compression

2022-08-10 Thread RJ Marcus (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-40038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17578169#comment-17578169
 ] 

RJ Marcus commented on SPARK-40038:
---

the screenshot with spillage didn't have the stage fully completed when I made 
the capture, which is why the total data input / output may not look correct 
(should be 230GB)

> spark.sql.files.maxPartitionBytes does not observe on-disk compression
> --
>
> Key: SPARK-40038
> URL: https://issues.apache.org/jira/browse/SPARK-40038
> Project: Spark
>  Issue Type: Question
>  Components: Input/Output, Optimizer, PySpark, SQL
>Affects Versions: 3.2.0
> Environment: files:
> - ORC with snappy compression
> - 232 GB files on disk 
> - 1800 files on disk (pretty sure no individual file is over 200MB)
> - 9 partitions on disk
> cluster:
> - EMR 6.6.0 (spark 3.2.0)
> - cluster: 288 vCPU (executors), 1.1TB memory (executors)
> OS info:
> LSB Version:    
> :core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch
> Distributor ID:    Amazon
> Description:    Amazon Linux release 2 (Karoo)
> Release:    2
> Codename:    Karoo
>Reporter: RJ Marcus
>Priority: Major
> Attachments: Screenshot from 2022-08-10 16-50-37.png, Screenshot from 
> 2022-08-10 16-59-56.png
>
>
> Why does `spark.sql.files.maxPartitionBytes` estimate the number of 
> partitions based on {_}file size on disk instead of the uncompressed file 
> size{_}?
> For example I have a dataset that is 213GB on disk. When I read this in to my 
> application I get 2050 partitions based on the default value of 128MB for 
> maxPartitionBytes. My application is a simple broadcast index join that adds 
> 1 column to the dataframe and writes it out. There is no shuffle.
> Initially the size of input /output records seem ok, but I still get a large 
> amount of memory "spill" on the executors. I believe this is due to the data 
> being highly compressed and each partition becoming too big when it is 
> deserialized to work on in memory.
> !image-2022-08-10-16-59-05-233.png!
> (If I try to do a repartition immediately after reading I still see the first 
> stage spilling memory to disk, so that is not the right solution or what I'm 
> interested in.) 
> Instead, I attempt to lower maxPartitionBytes by the (average) compression 
> ratio of my files (about 7x, so let's round up to 8). So I set 
> maxPartitionBytes=16MB.  At this point  I see that spark is reading in from 
> the file in 12-28 MB chunks. Now it makes 14316 partitions on the initial 
> file read and completes with no spillage. 
> !image-2022-08-10-16-59-59-778.png!
>  
> Is there something I'm missing here? Is this just intended behavior? How can 
> I tune my partition size correctly for my application when I do not know how 
> much the data will be compressed ahead of time?







[jira] [Updated] (SPARK-40038) spark.sql.files.maxPartitionBytes does not observe on-disk compression

2022-08-10 Thread RJ Marcus (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-40038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

RJ Marcus updated SPARK-40038:
--
Attachment: Screenshot from 2022-08-10 16-59-56.png
Screenshot from 2022-08-10 16-50-37.png

> spark.sql.files.maxPartitionBytes does not observe on-disk compression
> --
>
> Key: SPARK-40038
> URL: https://issues.apache.org/jira/browse/SPARK-40038
> Project: Spark
>  Issue Type: Question
>  Components: Input/Output, Optimizer, PySpark, SQL
>Affects Versions: 3.2.0
> Environment: files:
> - ORC with snappy compression
> - 232 GB files on disk 
> - 1800 files on disk (pretty sure no individual file is over 200MB)
> - 9 partitions on disk
> cluster:
> - EMR 6.6.0 (spark 3.2.0)
> - cluster: 288 vCPU (executors), 1.1TB memory (executors)
> OS info:
> LSB Version:    
> :core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch
> Distributor ID:    Amazon
> Description:    Amazon Linux release 2 (Karoo)
> Release:    2
> Codename:    Karoo
>Reporter: RJ Marcus
>Priority: Major
> Attachments: Screenshot from 2022-08-10 16-50-37.png, Screenshot from 
> 2022-08-10 16-59-56.png
>
>
> Why does `spark.sql.files.maxPartitionBytes` estimate the number of 
> partitions based on {_}file size on disk instead of the uncompressed file 
> size{_}?
> For example, I have a dataset that is 213GB on disk. When I read this into 
> my application I get 2050 partitions based on the default value of 128MB for 
> maxPartitionBytes. My application is a simple broadcast index join that adds 
> 1 column to the dataframe and writes it out. There is no shuffle.
> Initially the sizes of the input/output records seem OK, but I still get a 
> large amount of memory "spill" on the executors. I believe this is due to 
> the data being highly compressed and each partition becoming too big when it 
> is deserialized to work on in memory.
> !image-2022-08-10-16-59-05-233.png!
> (If I try to do a repartition immediately after reading I still see the first 
> stage spilling memory to disk, so that is not the right solution or what I'm 
> interested in.) 
> Instead, I attempt to lower maxPartitionBytes by the (average) compression 
> ratio of my files (about 7x, so let's round up to 8). So I set 
> maxPartitionBytes=16MB. At this point I see that Spark is reading from the 
> files in 12-28 MB chunks. Now it makes 14316 partitions on the initial 
> file read and completes with no spillage.
> !image-2022-08-10-16-59-59-778.png!
>  
> Is there something I'm missing here? Is this just intended behavior? How can 
> I tune my partition size correctly for my application when I do not know how 
> much the data will be compressed ahead of time?






[jira] [Created] (SPARK-40038) spark.sql.files.maxPartitionBytes does not observe on-disk compression

2022-08-10 Thread RJ Marcus (Jira)
RJ Marcus created SPARK-40038:
-

 Summary: spark.sql.files.maxPartitionBytes does not observe 
on-disk compression
 Key: SPARK-40038
 URL: https://issues.apache.org/jira/browse/SPARK-40038
 Project: Spark
  Issue Type: Question
  Components: Input/Output, Optimizer, PySpark, SQL
Affects Versions: 3.2.0
 Environment: files:
- ORC with snappy compression
- 232 GB files on disk 
- 1800 files on disk (pretty sure no individual file is over 200MB)
- 9 partitions on disk


cluster:
- EMR 6.6.0 (spark 3.2.0)
- cluster: 288 vCPU (executors), 1.1TB memory (executors)

OS info:
LSB Version:    
:core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch
Distributor ID:    Amazon
Description:    Amazon Linux release 2 (Karoo)
Release:    2
Codename:    Karoo
Reporter: RJ Marcus


Why does `spark.sql.files.maxPartitionBytes` estimate the number of partitions 
based on {_}file size on disk instead of the uncompressed file size{_}?

For example, I have a dataset that is 213GB on disk. When I read this into my 
application I get 2050 partitions based on the default value of 128MB for 
maxPartitionBytes. My application is a simple broadcast index join that adds 1 
column to the dataframe and writes it out. There is no shuffle.

Initially the sizes of the input/output records seem OK, but I still get a large 
amount of memory "spill" on the executors. I believe this is due to the data 
being highly compressed and each partition becoming too big when it is 
deserialized to work on in memory.

!image-2022-08-10-16-59-05-233.png!

(If I try to do a repartition immediately after reading I still see the first 
stage spilling memory to disk, so that is not the right solution or what I'm 
interested in.) 

Instead, I attempt to lower maxPartitionBytes by the (average) compression 
ratio of my files (about 7x, so let's round up to 8). So I set 
maxPartitionBytes=16MB. At this point I see that Spark is reading from the 
files in 12-28 MB chunks. Now it makes 14316 partitions on the initial file 
read and completes with no spillage.

!image-2022-08-10-16-59-59-778.png!
 
Is there something I'm missing here? Is this just intended behavior? How can I 
tune my partition size correctly for my application when I do not know how much 
the data will be compressed ahead of time?
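The arithmetic behind the workaround described above can be sketched as follows: scale the split size down by an estimated compression ratio so that a partition read from compressed files decompresses to roughly the intended in-memory size. The 128 MB default and the ~7x ratio come from the report; the helper function name is just for illustration.

```python
def adjusted_max_partition_bytes(target_in_memory_bytes, est_compression_ratio):
    """Scale the on-disk split size down so that a partition decompresses
    to roughly the target in-memory size."""
    return int(target_in_memory_bytes / est_compression_ratio)

DEFAULT_TARGET = 128 * 1024 * 1024   # Spark's default maxPartitionBytes (128 MB)
ratio = 8                            # ~7x snappy ratio observed, rounded up

new_setting = adjusted_max_partition_bytes(DEFAULT_TARGET, ratio)
print(new_setting)  # -> 16777216 bytes (16 MB, the value the reporter used)

# In PySpark this would then be applied before reading, e.g.:
# spark.conf.set("spark.sql.files.maxPartitionBytes", new_setting)
```

The caveat raised in the question remains: this only works when the compression ratio is known (or measured) ahead of time, since Spark plans splits from on-disk sizes.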






[Conselhobrasil] fcostapb extended their membership

2022-04-15 Thread Ubuntu Brasil - RJ
Hello Conselho Ubuntu Brasil,

Francisco Costa (fcostapb) renewed their own membership in the Ubuntu
Brasil - RJ (ubuntu-br-rj) team until 2023-04-16.
<https://launchpad.net/~ubuntu-br-rj>

Regards,
The Launchpad team

-- 
You received this email because your team Conselho Ubuntu Brasil is an admin of 
the Ubuntu Brasil - RJ team.


___
Mailing list: https://launchpad.net/~conselhobrasil
Post to : conselhobrasil@lists.launchpad.net
Unsubscribe : https://launchpad.net/~conselhobrasil
More help   : https://help.launchpad.net/ListHelp


[Conselhobrasil] flavio-bordoni extended their membership

2022-04-11 Thread Ubuntu Brasil - RJ
Hello Conselho Ubuntu Brasil,

Flavio Bordoni (flavio-bordoni) renewed their own membership in the
Ubuntu Brasil - RJ (ubuntu-br-rj) team until 2023-04-17.
<https://launchpad.net/~ubuntu-br-rj>

Regards,
The Launchpad team



[Conselhobrasil] profmarcilio extended their membership

2022-04-10 Thread Ubuntu Brasil - RJ
Hello Conselho Ubuntu Brasil,

Marcilio Bergami de Carvalho (profmarcilio) renewed their own membership
in the Ubuntu Brasil - RJ (ubuntu-br-rj) team until 2023-04-16.
<https://launchpad.net/~ubuntu-br-rj>

Regards,
The Launchpad team



[Conselhobrasil] mdantas extended their membership

2022-04-10 Thread Ubuntu Brasil - RJ
Hello Conselho Ubuntu Brasil,

Alexander Pindarov (mdantas) renewed their own membership in the Ubuntu
Brasil - RJ (ubuntu-br-rj) team until 2023-04-16.
<https://launchpad.net/~ubuntu-br-rj>

Regards,
The Launchpad team



[jira] [Updated] (SPARK-38037) Spark MLlib FPGrowth not working with 40+ items in Frequent Item set

2022-01-29 Thread RJ (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-38037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

RJ updated SPARK-38037:
---
Description: 
We have been using Spark FPGrowth and it works well with millions of 
transactions (records) when the number of frequent items in the Frequent 
Itemset is less than 25. Beyond 25 it runs into a computational limit. For 40+ 
items in the Frequent Itemset the process never returns.

To reproduce, you can create a simple data set of 3 transactions with identical 
items (40 of them) and run FPGrowth with 0.9 support; the process never 
completes. Below is a sample data set I have used to narrow down the problem:
|I1|I2|I3|I4|I5|I6|I7|I8|I9|I10|I11|I12|I13|I14|I15|I16|I17|I18|I19|I20|I21|I22|I23|I24|I25|I26|I27|I28|I29|I30|I31|I32|I33|I34|I35|I36|I37|I38|I39|I40|
|I1|I2|I3|I4|I5|I6|I7|I8|I9|I10|I11|I12|I13|I14|I15|I16|I17|I18|I19|I20|I21|I22|I23|I24|I25|I26|I27|I28|I29|I30|I31|I32|I33|I34|I35|I36|I37|I38|I39|I40|
|I1|I2|I3|I4|I5|I6|I7|I8|I9|I10|I11|I12|I13|I14|I15|I16|I17|I18|I19|I20|I21|I22|I23|I24|I25|I26|I27|I28|I29|I30|I31|I32|I33|I34|I35|I36|I37|I38|I39|I40|

 

While the computation grows as (2^n - 1) with each item in the Frequent 
Itemset, it surely should be able to handle 40 or more items in a Frequent 
Itemset.

 

Is this an FPGrowth implementation limitation, or are there tuning parameters 
that I am missing? Thank you.

  was:
We have been using Spark FPGrowth and it works well with millions of 
transactions (records) when the frequent items in the Frequent Itemset is less 
than 25. Beyond 25 it runs into computational limit. For 40+ items in the 
Frequent Itemset the process never return.

To reproduce, you can create a simple data set of 3 transactions with equal 
items (40 of them) and run FPgrowth with 0.9 support, the process never 
completes. Below is a sample data I have used to narrow down the problem:
 
|I1|I2|I3|I4|I5|I6|I7|I8|I9|I10|I11|I12|I13|I14|I15|I16|I17|I18|I19|I20|I21|I22|I23|I24|I25|I26|I27|I28|I29|I30|I31|I32|I33|I34|I35|I36|I37|I38|I39|I40|
|I1|I2|I3|I4|I5|I6|I7|I8|I9|I10|I11|I12|I13|I14|I15|I16|I17|I18|I19|I20|I21|I22|I23|I24|I25|I26|I27|I28|I29|I30|I31|I32|I33|I34|I35|I36|I37|I38|I39|I40|
|I1|I2|I3|I4|I5|I6|I7|I8|I9|I10|I11|I12|I13|I14|I15|I16|I17|I18|I19|I20|I21|I22|I23|I24|I25|I26|I27|I28|I29|I30|I31|I32|I33|I34|I35|I36|I37|I38|I39|I40|

 

While the computation grows (2^n -1) with each item in Frequent Itemset, it 
surely should be able to handle 40 or more items in a Frequest Itemset

 

 


> Spark MLlib FPGrowth not working with 40+ items in Frequent Item set
> 
>
> Key: SPARK-38037
> URL: https://issues.apache.org/jira/browse/SPARK-38037
> Project: Spark
>  Issue Type: Bug
>  Components: ML
>Affects Versions: 3.2.0
> Environment: Standalone Linux server
> 32 GB RAM
> 4 core
>  
>Reporter: RJ
>Priority: Major
>
> We have been using Spark FPGrowth and it works well with millions of 
> transactions (records) when the number of frequent items in the Frequent 
> Itemset is less than 25. Beyond 25 it runs into a computational limit. For 
> 40+ items in the Frequent Itemset the process never returns.
> To reproduce, you can create a simple data set of 3 transactions with 
> identical items (40 of them) and run FPGrowth with 0.9 support; the process 
> never completes. Below is a sample data I have used to narrow down the problem:
> |I1|I2|I3|I4|I5|I6|I7|I8|I9|I10|I11|I12|I13|I14|I15|I16|I17|I18|I19|I20|I21|I22|I23|I24|I25|I26|I27|I28|I29|I30|I31|I32|I33|I34|I35|I36|I37|I38|I39|I40|
> |I1|I2|I3|I4|I5|I6|I7|I8|I9|I10|I11|I12|I13|I14|I15|I16|I17|I18|I19|I20|I21|I22|I23|I24|I25|I26|I27|I28|I29|I30|I31|I32|I33|I34|I35|I36|I37|I38|I39|I40|
> |I1|I2|I3|I4|I5|I6|I7|I8|I9|I10|I11|I12|I13|I14|I15|I16|I17|I18|I19|I20|I21|I22|I23|I24|I25|I26|I27|I28|I29|I30|I31|I32|I33|I34|I35|I36|I37|I38|I39|I40|
>  
> While the computation grows as (2^n - 1) with each item in the Frequent 
> Itemset, it surely should be able to handle 40 or more items in a Frequent 
> Itemset.
>  
> Is this an FPGrowth implementation limitation, or are there tuning 
> parameters that I am missing? Thank you.






[jira] [Created] (SPARK-38037) Spark MLlib FPGrowth not working with 40+ items in Frequent Item set

2022-01-26 Thread RJ (Jira)
RJ created SPARK-38037:
--

 Summary: Spark MLlib FPGrowth not working with 40+ items in 
Frequent Item set
 Key: SPARK-38037
 URL: https://issues.apache.org/jira/browse/SPARK-38037
 Project: Spark
  Issue Type: Bug
  Components: ML
Affects Versions: 3.2.0
 Environment: Standalone Linux server

32 GB RAM

4 core

 
Reporter: RJ


We have been using Spark FPGrowth and it works well with millions of 
transactions (records) when the number of frequent items in the Frequent 
Itemset is less than 25. Beyond 25 it runs into a computational limit. For 40+ 
items in the Frequent Itemset the process never returns.

To reproduce, you can create a simple data set of 3 transactions with identical 
items (40 of them) and run FPGrowth with 0.9 support; the process never 
completes. Below is a sample data set I have used to narrow down the problem:
 
|I1|I2|I3|I4|I5|I6|I7|I8|I9|I10|I11|I12|I13|I14|I15|I16|I17|I18|I19|I20|I21|I22|I23|I24|I25|I26|I27|I28|I29|I30|I31|I32|I33|I34|I35|I36|I37|I38|I39|I40|
|I1|I2|I3|I4|I5|I6|I7|I8|I9|I10|I11|I12|I13|I14|I15|I16|I17|I18|I19|I20|I21|I22|I23|I24|I25|I26|I27|I28|I29|I30|I31|I32|I33|I34|I35|I36|I37|I38|I39|I40|
|I1|I2|I3|I4|I5|I6|I7|I8|I9|I10|I11|I12|I13|I14|I15|I16|I17|I18|I19|I20|I21|I22|I23|I24|I25|I26|I27|I28|I29|I30|I31|I32|I33|I34|I35|I36|I37|I38|I39|I40|

 

While the computation grows as (2^n - 1) with each item in the Frequent 
Itemset, it surely should be able to handle 40 or more items in a Frequent 
Itemset.
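The combinatorial blow-up mentioned in the report can be made concrete: every non-empty subset of a frequent itemset is itself frequent, so an itemset of n shared items yields 2^n - 1 candidate itemsets. A small sketch of the numbers (not of the FP-Growth algorithm itself):

```python
def candidate_itemsets(n_items):
    # every non-empty subset of a frequent itemset is also frequent,
    # so the candidate count grows as 2^n - 1
    return 2 ** n_items - 1

for n in (25, 40):
    print(n, candidate_itemsets(n))
# 25 -> 33554431 (tens of millions: tractable)
# 40 -> 1099511627775 (~1.1e12: why the job appears to never finish)
```

This suggests the behavior is inherent to enumerating all frequent itemsets rather than a bug, though whether MLlib could cap or stream the output is a fair question.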

 

 






Bug#1001272: Finding a solution

2021-12-10 Thread Thom RJ
I have rebuilt the package with the --enable-pango option.

Now jwm -v compiled options look like:
confirm icons jpeg nls pango png shape svg xbm xft xinerama xpm xrender

instead of:
confirm icons jpeg nls   png shape svg xbm xft xinerama xpm xrender

After rebuilding the package with pango enabled, my broken fonts issue was
gone.
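For reference, a Debian source rebuild along the lines described might look like the following; the exact place to add `--enable-pango` (typically the configure flags in `debian/rules`) depends on the packaging, so treat this as a sketch rather than exact commands:

```
apt-get source jwm                 # fetch the package source
sudo apt-get build-dep jwm         # install build dependencies
cd jwm-*/
# edit debian/rules so ./configure is invoked with --enable-pango
dpkg-buildpackage -us -uc          # rebuild the package
sudo dpkg -i ../jwm_*.deb          # install the rebuilt package
jwm -v                             # compiled options should now list "pango"
```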


Re: Price Rules and Customer Classifications

2021-12-08 Thread Jason RJ

Hi Justine

This is mostly from memory of situations I've seen. I seem to recall 
having issues with PartyClassifications in rules. As an experiment it 
might be worth checking using a different condition on the price rule 
instead of Party Classification (even just a PartyId). You could also try 
setting the price to $0.50 instead and see if that works, just to 
sanity-check the rule logic, because the price calculation might be 
preferring a different non-zero price.


Jason

On 07/12/2021 21:56, Justine Nowak wrote:

Hello,

We have decided that some of our customers shouldn't be allowed to submit a
sales order. We have approached this by creating a category called "No
Sales" and when we want products to not be sold to a customer we add them
to this category. We also add our customers to a party classification. This
way we set up a price rule that sets the sale price for products under a
category and for users under a specific classification to $0.00 limiting
them from ordering that product.

But when we do this it doesn't work. How can we fix this?

Thanks Justine



Re: Product Specifications

2021-11-10 Thread Jason RJ

Hi Justine,

If you make the product feature that will provide the custom 
specification a DISTINGUISHING_FEATURE you can use this:


Add this to 
applications/order/groovyScripts/entry/catalog/ProductDetails.groovy 
around line 242 after disFeatureList is setup:


    // get the details of the Feature Application attributes
    productFeatureApplAttrs = delegator.findByAnd("ProductFeatureApplAttr",
        [productId : productId], ["attrName"], false);

    context.productFeatureApplAttrs = productFeatureApplAttrs;

And you can use this markup in 
ecommerce/template/catalog/ProductDetail.ftl probably replacing the 
existing disFeaturesList block at Lines 591-605:


            
                <#if disFeatureList?has_content>
                    
                    <#list disFeatureList as distFeature>
                        
                        class="${distFeature.idCode}">${distFeature.description}
                        <#if 
productFeatureApplAttrs?has_content>

                            
    <#list productFeatureApplAttrs 
as currentFeatureApplAttr>
    <#if 
currentFeatureApplAttr.productFeatureId?exists && 
currentFeatureApplAttr.productFeatureId == distFeature.productFeatureId>
    class="attrib-value">${currentFeatureApplAttr.attrName?default('')}${currentFeatureApplAttr.attrValue?default('')}

    
    
    
                        
                            
                    
                    
                
            

You will need some styling to make it look decent but something like 
this is achievable with the above https://imgur.com/a/IxcpZjJ


Jason

On 09/11/2021 22:26, Justine Nowak wrote:

Thank you Jason,

Also how do we show the product features to the eCommerce site?


On Tue, Nov 9, 2021 at 4:57 AM Jason RJ  wrote:


Hi Justine,

On the Product->Feature page at the bottom you can add "Product Feature
Attributes" so for example if you have a Product Feature assigned such
as "Camera" for a mobile phone, then you add Feature Attributes
Name/Value pairs e.g. "Flash":"Yes", "Main Camera":"12.0mp", "Image Geo
Tagging":"Yes".

Jason

On 08/11/2021 21:33, Justine Nowak wrote:

Hello,

How do we add custom product specifications and how do we display them on
the e-commerce website? This would have to be dynamic templating because
each product could have a different spec with a different value. Do we

use

attributes or features or something else?



Example:https://imgur.com/a/7R3hdYx

-Justine


Re: Product Specifications

2021-11-09 Thread Jason RJ

Hi Justine,

On the Product->Feature page at the bottom you can add "Product Feature 
Attributes" so for example if you have a Product Feature assigned such 
as "Camera" for a mobile phone, then you add Feature Attributes 
Name/Value pairs e.g. "Flash":"Yes", "Main Camera":"12.0mp", "Image Geo 
Tagging":"Yes".


Jason

On 08/11/2021 21:33, Justine Nowak wrote:

Hello,

How do we add custom product specifications and how do we display them on
the e-commerce website? This would have to be dynamic templating because
each product could have a different spec with a different value. Do we use
attributes or features or something else?



Example:https://imgur.com/a/7R3hdYx

-Justine


Re: how to render newly created ftl onto the page

2021-11-08 Thread Jason RJ

Maheshwari,

After changing the files, have you re-run the data load so that the content 
of Theme.xml is read into the database?


>./gradlew "ofbiz --load-data readers=demo"

I'm not 100% sure if this is still necessary.

Jason

On 08/11/2021 08:06, Mahi maheshwari wrote:

Hi,

I have a newly created ftl file(NavSidebar.ftl) which I have to render onto
the page.
so far I  have done this,
1) have created a plugin for the new theme called "xerusTheme"
plugins/xerusTheme/data/XerusThemeDemoData.xml




2) plugins/xerusTheme/widget/Theme.xml  has this
http://www.w3.org/2001/XMLSchema-instance;
 xsi:noNamespaceSchemaLocation="
http://ofbiz.apache.org/dtds/widget-theme.xsd;>
 
 
  theme 
 
 
 

 

 
 
 
 
 


 
 
 
 
 
 
 
 
 
 
 
   =>have tried
without ['add'] also
 
 

 


3) added macro
plugins/xerusTheme/templates/macro/HTMLFormMacroLibrary.ftl

<#include
"component://common-theme/template/macro/HtmlFormMacroLibrary.ftl"/>
<#macro renderDisplayField type imageLocation idName description title
class alert inPlaceEditorUrl="" inPlaceEditorParams="">
 <#if description?has_content>
 *###*${description?replace("\n", "")}**<#t/>
 <#else>
 *##*<#t/>
 


4) plugins/xerusTheme/ofbiz-component.xml

http://www.w3.org/2001/XMLSchema-instance;
 xsi:noNamespaceSchemaLocation="
http://ofbiz.apache.org/dtds/ofbiz-component.xsd;>
 
 

 
 
 

 
 
 
 
 
 
 

 
 
 

 

 
 


I'm not getting what I'm missing.
all my ftl files are in  plugins/xerusTheme/template/  =>referThisImage


Header.ftl , Footer.ftl are rendering but NavSIdebar.ftl is not rendering.
I have made an entry in below mentioned files for NavSidebar.ftl
1) themes/common/widget/commomScreens.xml
screen name="GlobalDecorator" => added line no:152
  =>referThisImage


2) framework/common/data/commonTypeData.xml  => added line no:117 =>
referThisImage



(this added an entry in enumeration entity =>referThis
)

3) themes/common/widget/Theme.xml  => added line no:58 =>referToThisImage




Am I missing something?

Thanks and Regards,
maheshwari.




[tw5] Re: Tiddlywiki Keyboard Navigation

2021-07-12 Thread RJ Skerry-Ryan
Awesome, I love the work that you're doing and am so thankful. :) Having 
more stuff in core when it makes sense sounds great to me.

On Discord, saqimtiaz pointed me at tw5-keyboard-navigation-plugin 
 
which works really well for my navigation/editing by keyboard needs at the 
moment.



On Monday, July 12, 2021 at 12:38:44 AM UTC-7 BurningTreeC wrote:

> Hello @Russel
>
> Besides this "Select Mode" I've done many experiments to find a solution 
> for the "focus tiddler" and over at GitHub 
>  we're 
> currently discussing another approach
> I've moved away from trying to implement a plugin on my own to trying to 
> get something to the tiddlywiki core. But I think there's still a lot of 
> work to do and help is needed
>
> best wishes, BTC
>
>
> russel...@gmail.com schrieb am Montag, 12. Juli 2021 um 05:50:41 UTC+2:
>
>> Great work! Select mode works really well. It looks like a possible 
>> solution for the "focus tiddler", "navigate river with j/k" and "edit 
>> current selected tiddler" feature requests (related issues 1537 
>> , 980 
>> ).
>>
>> Is the code available for this anywhere? The demo for 
>> https://github.com/BurningTreeC/tiddlywiki-muuri doesn't seem to have 
>> this?
>>
>> On Monday, July 23, 2018 at 6:15:44 AM UTC-7 BurningTreeC wrote:
>>
>>> I found that I only need one Mode - I call it select mode.
>>>
>>> I activate it using alt-S. When it activates it sets some keyboard 
>>> shortcuts active (adds the tag $:/tags/KeyboardShortcut to the already 
>>> created action-tiddlers ... leaving select-mode disables these shortcuts)
>>> these keyboard shortcuts have single letters assigned: E for editing, C 
>>> for closing/cancelling, D for deleting, Y for cloning tiddlers. These 
>>> shortcuts automatically also leave select-mode.
>>> Left/J/K navigates to the previous tiddler in the story river, 
>>> Right/L/Backquote navigates to the next
>>>
>>> I find this really handy and it works just with the keeboord plugin, no 
>>> other dependencies needed
>>> I think it'd be good to make the tiddler-edit-button leave edit-mode in 
>>> case it's on, so that one can instantly start typing without having to 
>>> press Escape first
>>>
>>> If you like it, let me know what you think
>>>
>>> BTC
>>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"TiddlyWiki" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to tiddlywiki+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/tiddlywiki/dabb9507-7ced-40fd-8705-bd99fe7fd13fn%40googlegroups.com.


[tw5] Re: Tiddlywiki Keyboard Navigation

2021-07-11 Thread RJ Skerry-Ryan
Great work! Select mode works really well. It looks like a possible 
solution for the "focus tiddler", "navigate river with j/k" and "edit 
current selected tiddler" feature requests (related issues 1537 
, 980 
).

Is the code available for this anywhere? The demo 
for https://github.com/BurningTreeC/tiddlywiki-muuri doesn't seem to have 
this?

On Monday, July 23, 2018 at 6:15:44 AM UTC-7 BurningTreeC wrote:

> I found that I only need one Mode - I call it select mode.
>
> I activate it using alt-S. When it activates it sets some keyboard 
> shortcuts active (adds the tag $:/tags/KeyboardShortcut to the already 
> created action-tiddlers ... leaving select-mode disables these shortcuts)
> these keyboard shortcuts have single letters assigned: E for editing, C 
> for closing/cancelling, D for deleting, Y for cloning tiddlers. These 
> shortcuts automatically also leave select-mode.
> Left/J/K navigates to the previous tiddler in the story river, 
> Right/L/Backquote navigates to the next
>
> I find this really handy and it works just with the keeboord plugin, no 
> other dependencies needed
> I think it'd be good to make the tiddler-edit-button leave edit-mode in 
> case it's on, so that one can instantly start typing without having to 
> press Escape first
>
> If you like it, let me know what you think
>
> BTC
>

-- 
You received this message because you are subscribed to the Google Groups 
"TiddlyWiki" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to tiddlywiki+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/tiddlywiki/0723c087-9261-439a-ab17-a1436293d112n%40googlegroups.com.


JEP411: Missing use-case: Monitoring / restricting libraries

2021-05-31 Thread Harshad RJ
> In reply to Ron Pressler:
> Do you regularly use the Security Manager to sandbox your own dependencies 
> and find it convenient and effective
> — in which case, could you please describe your practice concretely so that 
> it would be possible to consider
> alternatives — or are you saying that you can *envision* such a powerful 
> use-case? The desire to remove the
> Security Manager does not stem from its theoretical utility, which is 
> absolutely amazing, but from its practical
> utility, which years of experience have found to be less than amazing after 
> all, and probably too low to justify
> its burdensome cost.

# Practical example of SecurityManager use
We are developing a security and privacy conscious browser in pure Java:
https://gngr.info

`gngr` uses `SecurityManager` at its core to not only sandbox external
libraries but also *internal modules*!

# External Libraries we use
* `Rhino` for Javascript
* `jStyleParser` for CSS parsing and analysing
* `okhttp` for HTTP/2 support (before the Java11 HTTP client was available)

These are huge libraries that are practically impossible to audit by a
small team, but through the SecurityManager we
were able to identify the following issues very easily:

## Security issues discovered so far
* Network access by the CSS parser library:
https://github.com/radkovo/jStyleParser/issues/14

* Use of reflection to make a private method accessible
https://github.com/square/okhttp/issues/3426

# Trust, but verify
We don't use `SecurityManager` because we don't trust the external
library authors; we do trust them else we wouldn't
be using those libraries! And given that these are popular projects,
we can also bank on oversight from the community
at large.

But what if we could also verify this trust and other assumptions, for
very little overhead? The SecurityManager does
require initial time investment, but IMO the returns justify it.

A good example of trust but verify, is our own internal modules. We
obviously trust our own code, yet we sandbox it,
to be sure that our assumptions are correct. For example, the `Cookie`
module inside the browser only needs File
access, in order to persist the cookies. Having it sandboxed relieves
us from worrying about some bug or debug code
mistakenly accessing the internet or any of the other myriad
capabilities that the code has access to.
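Under the SecurityManager model, the "Cookie module gets only File access" setup described above would be expressed with a policy grant along these lines. The codebase path and cookie directory here are invented for illustration; only the idea (a whitelist containing a single `FilePermission`) comes from the text:

```
// hypothetical java.policy entry: the cookie module may touch its own
// on-disk store and nothing else (no sockets, no reflection, ...)
grant codeBase "file:/opt/gngr/modules/cookies.jar" {
    permission java.io.FilePermission "/var/lib/gngr/cookies/-", "read,write,delete";
};
```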

# Numbers are misleading
One argument I have seen in support of JEP 411 is that there are very
few projects that use the SecurityManager. But
this outcome seems natural to me:
1. Many users and developers are not aware of Security threats, or
don't give enough priority to them.
2. The SecurityManager is hard to setup.

I expect #1 to change over a period of time, as more data gets
breached and more people learn from this experience.

#2 could be solved with better documentation, tutorials, etc. Or
through development of an alternative API.

(Imagine if WeakReferences were removed from the JVM because they were
not used by many projects! Of course, they won't
be used by many projects because they are not relevant to most of
them. However, for the projects that do use them, they
are very essential.)

# Logging events violates POLP
The alternative of using logging, with JFR for example, might be
suitable for certain use-cases, but is not a general
replacement for the protections provided by SecurityManager.

For one, it might not be possible to exercise all code-paths in tests.
Hence, the events made during testing might be
a subset of the actual behavior in production.

Secondly, it is not possible to know all events that need to be
captured in advance. Using a SecurityManager allows
us to follow a whitelisting policy, which is practical because the
white list is finite.

However, the alternative of logging events requires us to list all
possible capabilities in advance, which is impossible,
as the capabilities provided by the JVM or the standard library might
grow in a future version.

Thirdly, using Java agents with bytecode manipulation, as suggested on
this list, doesn't seem any easier than using
the SecurityManager.

best,
-- 
Harshad RJ


[Conselhobrasil] flavio-bordoni extended their membership

2021-04-11 Thread Ubuntu Brasil - RJ
Hello Conselho Ubuntu Brasil,

Flavio Bordoni (flavio-bordoni) renewed their own membership in the
Ubuntu Brasil - RJ (ubuntu-br-rj) team until 2022-04-17.
<https://launchpad.net/~ubuntu-br-rj>

Regards,
The Launchpad team



[Conselhobrasil] profmarcilio extended their membership

2021-04-11 Thread Ubuntu Brasil - RJ
Hello Conselho Ubuntu Brasil,

Marcilio Bergami de Carvalho (profmarcilio) renewed their own membership
in the Ubuntu Brasil - RJ (ubuntu-br-rj) team until 2022-04-16.
<https://launchpad.net/~ubuntu-br-rj>

Regards,
The Launchpad team



[Conselhobrasil] fcostapb extended their membership

2021-04-10 Thread Ubuntu Brasil - RJ
Hello Conselho Ubuntu Brasil,

Francisco Costa (fcostapb) renewed their own membership in the Ubuntu
Brasil - RJ (ubuntu-br-rj) team until 2022-04-16.
<https://launchpad.net/~ubuntu-br-rj>

Regards,
The Launchpad team



[Conselhobrasil] mdantas extended their membership

2021-04-09 Thread Ubuntu Brasil - RJ
Hello Conselho Ubuntu Brasil,

Alexander Pindarov (mdantas) renewed their own membership in the Ubuntu
Brasil - RJ (ubuntu-br-rj) team until 2022-04-16.
<https://launchpad.net/~ubuntu-br-rj>

Regards,
The Launchpad team



Re: [CentOS-virt] qemu-kvm-ev: usb: out-of-bounds r/w(CVE-2020-14364)

2021-03-28 Thread rj...@vip.qq.com
I have reported it on Bugzilla: 
https://bugzilla.redhat.com/show_bug.cgi?id=1943399; but that product seems 
to only support oVirt.
I have also opened an issue in the CentOS tracker: 
https://bugs.centos.org/view.php?id=18131
Thanks.




jasonrao
 
From: centos-virt-request
Date: 2021-03-16 20:00
To: centos-virt
Subject: CentOS-virt Digest, Vol 159, Issue 2
Send CentOS-virt mailing list submissions to
centos-virt@centos.org
 
To subscribe or unsubscribe via the World Wide Web, visit
https://lists.centos.org/mailman/listinfo/centos-virt
or, via email, send a message with subject or body 'help' to
centos-virt-requ...@centos.org
 
You can reach the person managing the list at
centos-virt-ow...@centos.org
 
When replying, please edit your Subject line so it is more specific
than "Re: Contents of CentOS-virt digest..."
 
 
Today's Topics:
 
   1. Re: qemu-kvm-ev: usb: out-of-bounds r/w(CVE-2020-14364)
  (Sandro Bonazzola)
   2. Unable to Login to AWS AMI With SSH Key - aarch64 (David Lemcoe)
   3. Re: Unable to Login to AWS AMI With SSH Key - aarch64
  (David Lemcoe)
 
 
--
 
Message: 1
Date: Mon, 15 Mar 2021 17:30:44 +0100
From: Sandro Bonazzola 
To: Discussion about the virtualization on CentOS

Subject: Re: [CentOS-virt] qemu-kvm-ev: usb: out-of-bounds
r/w(CVE-2020-14364)
Message-ID:

Content-Type: text/plain; charset="utf-8"
 
On Wed, 3 Mar 2021 at 09:56, rj...@vip.qq.com wrote:
 
> Hello
> I saw that qemu-kvm-rhev has fixed the issue, but  CentOS
> community hasn't updated the repaired version of qemu-kvm-ev;
> will it be fixed in the future?
>
 
Can you please open a BZ on
https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-distribution&component=qemu-kvm-ev
?
Thanks
 
 
 
> thanks
> ___
> CentOS-virt mailing list
> CentOS-virt@centos.org
> https://lists.centos.org/mailman/listinfo/centos-virt
>
 
 
-- 
 
Sandro Bonazzola
 
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
 
Red Hat EMEA <https://www.redhat.com/>
 
sbona...@redhat.com
<https://www.redhat.com/>
 
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
-- next part --
An HTML attachment was scrubbed...
URL: 
<http://lists.centos.org/pipermail/centos-virt/attachments/20210315/52e2ec49/attachment-0001.html>
 
--
 
Message: 2
Date: Mon, 15 Mar 2021 12:36:42 -0400
From: David Lemcoe 
To: centos-virt@centos.org
Subject: [CentOS-virt] Unable to Login to AWS AMI With SSH Key -
aarch64
Message-ID: <12d7b36c-db26-41d6-be8f-779153eca...@delcoe.com>
Content-Type: text/plain; charset="us-ascii"
 
When launching CentOS Stream for aarch64 in us-east-1 using the 
CentOS-sponsored AMI (ami-0a311be1169cd6581, found at 
https://wiki.centos.org/Cloud/AWS <https://wiki.centos.org/Cloud/AWS>) I am 
able to launch the EC2 instance using a Graviton2 processor, as expected. 
However, when attempting to login to that instance, I get a password prompt for 
the ec2-user, centos, and root users. 
 
This behavior is not expected, because on the x86_64 AMIs the centos user is 
configured to use the SSH key selected in the AWS EC2 Launch Wizard, and an SSH 
login password is not required. 
 
In the aarch64 AMI, the centos and root usernames all prompt for password, and 
never seem to consider the provided SSH key.
 
This is the SSH command that I am using:
 
ssh -i ssh_key_selected_at_launch.pem centos@
 
This command results in a password prompt.
 
What is the process for connecting to the CentOS Stream AMI spun for aarch64?
 
Thank you!
 
David Lemcoe Jr.
 
 
--
 
Message: 3
Date: Mon, 15 Mar 2021 12:46:49 -0400
From: David Lemcoe 
To: centos-virt@centos.org
Subject: Re: [CentOS-virt] Unable to Login to AWS AMI With SSH Key -
aarch64
Message-ID: 
Content-Type: text/plain; charset="utf-8"
 
I have resolved my "issue." It would appear that CentOS 8 Stream for aarch64 
does not support `t4g.nano` instance sizes. Once I moved to `t4g.small`, my SSH 
login worked as expected.
 
Sorry to bother!

Re: Firefox fails to create profile's permanent storage

2021-03-11 Thread RJ Johnson
> im not 100% sure at all, but *maybe* the method creating the dir
> hierarchy is
> https://searchfox.org/mozilla-central/source/xpcom/io/nsLocalFileUnix.cpp#360 
> .

You were right. I've created a bug on Bugzilla at
https://bugzilla.mozilla.org/show_bug.cgi?id=1697721 about this issue.

If you are interested, a patch-compatible version is below.


Index: xpcom/io/nsLocalFileUnix.cpp
--- xpcom/io/nsLocalFileUnix.cpp.orig
+++ xpcom/io/nsLocalFileUnix.cpp
@@ -365,6 +365,8 @@ nsLocalFile::CreateAllAncestors(uint32_t aPermissions)
   //  I promise to play nice
   char* buffer = mPath.BeginWriting();
   char* slashp = buffer;
+  int mkdir_result = 0;
+  int mkdir_errno;

 #ifdef DEBUG_NSIFILE
   fprintf(stderr, "nsIFile: before: %s\n", buffer);
@@ -393,9 +395,9 @@ nsLocalFile::CreateAllAncestors(uint32_t aPermissions)
 #ifdef DEBUG_NSIFILE
 fprintf(stderr, "nsIFile: mkdir(\"%s\")\n", buffer);
 #endif
-int mkdir_result = mkdir(buffer, aPermissions);
-int mkdir_errno = errno;
+mkdir_result = mkdir(buffer, aPermissions);
 if (mkdir_result == -1) {
+  mkdir_errno = errno;
   /*
* Always set |errno| to EEXIST if the dir already exists
* (we have to do this here since the errno value is not consistent
@@ -408,23 +410,22 @@ nsLocalFile::CreateAllAncestors(uint32_t aPermissions)
   }
 }

-/* Put the / back before we (maybe) return */
+/* Put the / back */
 *slashp = '/';
-
-/*
- * We could get EEXIST for an existing file -- not directory --
- * with the name of one of our ancestors, but that's OK: we'll get
- * ENOTDIR when we try to make the next component in the path,
- * either here on back in Create, and error out appropriately.
- */
-if (mkdir_result == -1 && mkdir_errno != EEXIST) {
-  return nsresultForErrno(mkdir_errno);
-}
   }

 #ifdef DEBUG_NSIFILE
   fprintf(stderr, "nsIFile: after: %s\n", buffer);
 #endif
+
+  /*
+   * We could get EEXIST for an existing file -- not directory --
+   * but that's OK: we'll get ENOTDIR when we try to make the final
+   * component of the path back in Create and error out appropriately.
+   */
+  if (mkdir_result == -1 && mkdir_errno != EEXIST) {
+return NS_ERROR_FAILURE;
+  }

   return NS_OK;
 }



Firefox fails to create profile's permanent storage

2021-03-04 Thread RJ Johnson
When creating a new profile (on first launch or with "firefox -P"),
Firefox fails to create the
"~/.mozilla/firefox//storage/permanent" folder.

I have observed this behavior with Firefox 85, 86, and 78esr, although
more versions are likely affected. This behavior was observed on a
machine running -current.

The two most obvious symptoms of this failure are the browser's Web
Developer tools showing no page source in the Inspector tab (non-esr)
and various error messages in the Browser Console relating to IndexedDB.

This failure is caused by unveil. When creating a profile, Firefox
begins checking each directory in the path
"/home//.mozilla/firefox//storage/permanent" for
existence (i.e., "/home" then "/home/" then ...). If any directory
in this chain does not exist, Firefox gives up on creating the
"permanent" folder. This is easily observed in a ktrace. (I did
"ktrace -id firefox -P". Search for "permanent".) Since Firefox has no
access to "/home" (despite having access to the profile folder), the
"permanent" folder is never created.

The easiest way to fix this issue, for profiles both new and old, is to
manually create the "permanent" folder after Firefox creates the profile
for you. Once this folder exists, Firefox seems to have no more issues.
It only has trouble creating this folder initially.

There are two other possible "fixes" involving unveil which I include
for completeness only. Both of these changes can be reverted after
running Firefox once with them applied. The first is to disable
"unveil.main". The second is to add:

/home r
~ r

to "unveil.main". This allows the directory existence checks to pass,
and Firefox happily creates the "permanent" folder.



[CentOS-virt] qemu-kvm-ev: usb: out-of-bounds r/w(CVE-2020-14364)

2021-03-03 Thread rj...@vip.qq.com
Hello
I saw that qemu-kvm-rhev has fixed the issue, but  CentOS community 
hasn't updated the repaired version of qemu-kvm-ev;
will it be fixed in the future?
thanks
___
CentOS-virt mailing list
CentOS-virt@centos.org
https://lists.centos.org/mailman/listinfo/centos-virt


wsmouse(4): user-defined touchpad tap button map

2021-02-28 Thread RJ Johnson
I recently posted a patch which allowed enabling/disabling touchpad tap
gestures [1]. After a lot of back and forth with bru@, I am now ready
to submit an updated version.

This version allows the user to map a tap gesture to a mouse button.
For mouse.tp.tapping, wsconsctl has been updated to accept a triple of
mouse buttons which corresponds to one-, two-, and three-finger tap
gestures, respectively. A tap gesture can be disabled by giving a value
of 0 for its mouse button. To not break existing configurations,
mouse.tp.tapping still accepts a single value and translates it into a
button map which simulates the current mouse.tp.tapping behavior.

[1] https://marc.info/?l=openbsd-tech&m=161310033523558&w=2

Index: sbin/wsconsctl/mousecfg.c
===
RCS file: /cvs/src/sbin/wsconsctl/mousecfg.c,v
retrieving revision 1.7
diff -u -p -u -p -r1.7 mousecfg.c
--- sbin/wsconsctl/mousecfg.c   2 Apr 2020 17:17:04 -   1.7
+++ sbin/wsconsctl/mousecfg.c   26 Feb 2021 07:03:47 -
@@ -40,9 +40,9 @@
 #define TP_FILTER_FIRSTWSMOUSECFG_DX_MAX
 #define TP_FILTER_LAST WSMOUSECFG_SMOOTHING
 #define TP_FEATURES_FIRST  WSMOUSECFG_SOFTBUTTONS
-#define TP_FEATURES_LAST   WSMOUSECFG_TAPPING
+#define TP_FEATURES_LAST   WSMOUSECFG_DISABLE
 #define TP_SETUP_FIRST WSMOUSECFG_LEFT_EDGE
-#define TP_SETUP_LAST  WSMOUSECFG_TAP_LOCKTIME
+#define TP_SETUP_LAST  WSMOUSECFG_TAP_THREE_BTNMAP
 #define LOG_FIRST  WSMOUSECFG_LOG_INPUT
 #define LOG_LAST   WSMOUSECFG_LOG_EVENTS

@@ -71,8 +71,10 @@ static const int touchpad_types[] = {

 struct wsmouse_parameters cfg_tapping = {
(struct wsmouse_param[]) {
-   { WSMOUSECFG_TAPPING, 0 }, },
-   1
+   { WSMOUSECFG_TAP_ONE_BTNMAP, 0 },
+   { WSMOUSECFG_TAP_TWO_BTNMAP, 0 },
+   { WSMOUSECFG_TAP_THREE_BTNMAP, 0 }, },
+   3
 };

 struct wsmouse_parameters cfg_scaling = {
@@ -262,6 +264,30 @@ set_percent(struct wsmouse_parameters *f
 }

 static int
+set_tapping(struct wsmouse_parameters *field, char *tapping)
+{
+   int i1, i2, i3;
+
+   switch (sscanf(tapping, "%d,%d,%d", &i1, &i2, &i3)) {
+   case 1:
+   if (i1 == 0) /* Disable */
+   i2 = i3 = i1;
+   else { /* Enable with defaults */
+   i1 = 1; /* Left click */
+   i2 = 3; /* Right click */
+   i3 = 2; /* Middle click */
+   }
+   /* FALLTHROUGH */
+   case 3:
+   set_value(field, WSMOUSECFG_TAP_ONE_BTNMAP, i1);
+   set_value(field, WSMOUSECFG_TAP_TWO_BTNMAP, i2);
+   set_value(field, WSMOUSECFG_TAP_THREE_BTNMAP, i3);
+   return (0);
+   }
+   return (-1);
+}
+
+static int
 set_edges(struct wsmouse_parameters *field, char *edges)
 {
float f1, f2, f3, f4;
@@ -359,6 +385,12 @@ mousecfg_rd_field(struct wsmouse_paramet
if (field == _param) {
if (read_param(field, val))
errx(1, "invalid input (param)");
+   return;
+   }
+
+   if (field == _tapping) {
+   if (set_tapping(field, val))
+   errx(1, "invalid input (tapping)");
return;
}

Index: share/man/man4/wsmouse.4
===
RCS file: /cvs/src/share/man/man4/wsmouse.4,v
retrieving revision 1.20
diff -u -p -u -p -r1.20 wsmouse.4
--- share/man/man4/wsmouse.44 Feb 2018 20:29:59 -   1.20
+++ share/man/man4/wsmouse.426 Feb 2021 07:03:46 -
@@ -26,7 +26,7 @@
 .\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 .\" SUCH DAMAGE.
 .\"
-.Dd $Mdocdate: February 4 2018 $
+.Dd $Mdocdate: February 23 2021 $
 .Dt WSMOUSE 4
 .Os
 .Sh NAME
@@ -87,13 +87,24 @@ is omitted, commands apply to
 .Pa /dev/wsmouse0 .
 .Bl -tag -width Ds
 .It Cm mouse.tp.tapping
-Setting this parameter to a non-zero value enables tap gestures.
-Contacts on the touchpad that are immediately released again
-trigger click events.
-One-finger, two-finger, and three-finger taps generate left-button,
-right-button, and middle-button clicks, respectively.
-If, within a short time interval, a second touch follows a one-finger
-tap, the button-up event is not issued until that touch ends
+Contacts on the touchpad that are immediately released again can
+be mapped to mouse button clicks. This list of three parameters
+configures these mappings, in the order:
+.Bd -literal -offset indent
+.Sm off
+.Ar one-finger , two-finger , three-finger
+.Sm on
+.Ed
+.Pp
+Setting a parameter to a positive value enables that tap gesture
+and maps it to the given mouse button. To disable all three tap
+gestures at once, provide the single value of 0. Conversely, a
+single non-zero value will enable one-finger, two-finger, and
+three-finger tap gestures with their default button mappings.

Re: wsmouse(4): allow independent touchpad tap gesture configuration

2021-02-16 Thread RJ Johnson
> Hi,
>
> I wouldn't mind to see such a feature in wsmouse, but there are two
> things that I don't like about the patch in this form: 1) It breaks
> existing configurations, ...

I know sometimes breaking changes are made in the name of forward
progress, and I wasn't sure if this was one of those times.

> ... and 2) it may bother the users who don't care about such an
> extension - probably the majority of those who enable tapping.
>
> A better approach might be to extend wsconsctl in such a way that
> it accepts both a single value as well as a triple of values as
> input, and prints them in alternating formats, depending on whether
> all values are the same or not.

While I handle single value assignment, I defer on printing. I worked
on it some, but it turned into a real mess. I also don't see much harm
in printing the full triple. Users don't see it except when it is
assigned during boot and when using wsconsctl, so it shouldn't be an
annoyance. And it may also serve as an indicator that something has
changed with regards to touchpad features, prompting investigation.

> (Moreover, if we add special handlers for the 'tapping' field in
> wsconsctrl, then it's possible to use the WSMOUSECFG_TAPPING
> parameter as a bit field, which might be more adequate here.)

This would require a few changes on the kernel side. Right now,
WSTPAD_TAPPING is a single bit within wstpad.features, which is why I
added two more flags. To store the configuration, adding a "tapping"
field to wstpad.params makes the most sense (to me). It wouldn't be too
much work to transition to this layout, and this has the advantage of
keeping all of the tapping configuration parameters together in one
place instead of as three feature flags. I assume that the user would
still see a triple and not the underlying integer representation when
configuring the touchpad via wsconsctl.

Below you will find my updated diff, which adds single value assignment
to wsconsctl for mouse.tp.tapping. I have also updated the man page to
reflect this change. However, it sounds like more changes are in store,
so I will wait for your feedback on what to implement.

Index: sbin/wsconsctl/mousecfg.c
===
RCS file: /cvs/src/sbin/wsconsctl/mousecfg.c,v
retrieving revision 1.7
diff -u -p -u -p -r1.7 mousecfg.c
--- sbin/wsconsctl/mousecfg.c   2 Apr 2020 17:17:04 -   1.7
+++ sbin/wsconsctl/mousecfg.c   16 Feb 2021 08:58:27 -
@@ -40,7 +40,7 @@
 #define TP_FILTER_FIRSTWSMOUSECFG_DX_MAX
 #define TP_FILTER_LAST WSMOUSECFG_SMOOTHING
 #define TP_FEATURES_FIRST  WSMOUSECFG_SOFTBUTTONS
-#define TP_FEATURES_LAST   WSMOUSECFG_TAPPING
+#define TP_FEATURES_LAST   WSMOUSECFG_THREEFINGERTAPPING
 #define TP_SETUP_FIRST WSMOUSECFG_LEFT_EDGE
 #define TP_SETUP_LAST  WSMOUSECFG_TAP_LOCKTIME
 #define LOG_FIRST  WSMOUSECFG_LOG_INPUT
@@ -71,8 +71,10 @@ static const int touchpad_types[] = {

 struct wsmouse_parameters cfg_tapping = {
(struct wsmouse_param[]) {
-   { WSMOUSECFG_TAPPING, 0 }, },
-   1
+   { WSMOUSECFG_ONEFINGERTAPPING, 0 },
+   { WSMOUSECFG_TWOFINGERTAPPING, 0 },
+   { WSMOUSECFG_THREEFINGERTAPPING, 0 }, },
+   3
 };

 struct wsmouse_parameters cfg_scaling = {
@@ -262,6 +264,24 @@ set_percent(struct wsmouse_parameters *f
 }

 static int
+set_tapping(struct wsmouse_parameters *field, char *tapping)
+{
+   int i1, i2, i3;
+
+   switch (sscanf(tapping, "%d,%d,%d", &i1, &i2, &i3)) {
+   case 1:
+   i2 = i3 = i1;
+   /* FALLTHROUGH */
+   case 3:
+   set_value(field, WSMOUSECFG_ONEFINGERTAPPING, i1);
+   set_value(field, WSMOUSECFG_TWOFINGERTAPPING, i2);
+   set_value(field, WSMOUSECFG_THREEFINGERTAPPING, i3);
+   return (0);
+   }
+   return (-1);
+}
+
+static int
 set_edges(struct wsmouse_parameters *field, char *edges)
 {
float f1, f2, f3, f4;
@@ -359,6 +379,12 @@ mousecfg_rd_field(struct wsmouse_paramet
if (field == _param) {
if (read_param(field, val))
errx(1, "invalid input (param)");
+   return;
+   }
+
+   if (field == _tapping) {
+   if (set_tapping(field, val))
+   errx(1, "invalid input (tapping)");
return;
}

Index: share/man/man4/wsmouse.4
===
RCS file: /cvs/src/share/man/man4/wsmouse.4,v
retrieving revision 1.20
diff -u -p -u -p -r1.20 wsmouse.4
--- share/man/man4/wsmouse.44 Feb 2018 20:29:59 -   1.20
+++ share/man/man4/wsmouse.416 Feb 2021 08:58:27 -
@@ -26,7 +26,7 @@
 .\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 .\" SUCH DAMAGE.
 .\"
-.Dd $Mdocdate: February 4 2018 $
+.Dd $Mdocdate: February 15 2021 $
 .Dt WSMOUSE 4
 .Os
 .Sh NAME
@@ -87,14 +87,23 

wsmouse(4): allow independent touchpad tap gesture configuration

2021-02-11 Thread RJ Johnson
Right now, there is a single master toggle for enabling all touchpad tap
gestures (mouse.tp.tapping). This patch splits this parameter into three
for independent control of one-finger, two-finger, and three-finger tap
gestures. This allows users to mix and match which gestures they prefer
for the ideal touchpad experience.

Index: sbin/wsconsctl/mousecfg.c
===
RCS file: /cvs/src/sbin/wsconsctl/mousecfg.c,v
retrieving revision 1.7
diff -u -p -u -p -r1.7 mousecfg.c
--- sbin/wsconsctl/mousecfg.c   2 Apr 2020 17:17:04 -   1.7
+++ sbin/wsconsctl/mousecfg.c   9 Feb 2021 00:17:14 -
@@ -40,7 +40,7 @@
 #define TP_FILTER_FIRSTWSMOUSECFG_DX_MAX
 #define TP_FILTER_LAST WSMOUSECFG_SMOOTHING
 #define TP_FEATURES_FIRST  WSMOUSECFG_SOFTBUTTONS
-#define TP_FEATURES_LAST   WSMOUSECFG_TAPPING
+#define TP_FEATURES_LAST   WSMOUSECFG_THREEFINGERTAPPING
 #define TP_SETUP_FIRST WSMOUSECFG_LEFT_EDGE
 #define TP_SETUP_LAST  WSMOUSECFG_TAP_LOCKTIME
 #define LOG_FIRST  WSMOUSECFG_LOG_INPUT
@@ -71,8 +71,10 @@ static const int touchpad_types[] = {

 struct wsmouse_parameters cfg_tapping = {
(struct wsmouse_param[]) {
-   { WSMOUSECFG_TAPPING, 0 }, },
-   1
+   { WSMOUSECFG_ONEFINGERTAPPING, 0 },
+   { WSMOUSECFG_TWOFINGERTAPPING, 0 },
+   { WSMOUSECFG_THREEFINGERTAPPING, 0 }, },
+   3
 };

 struct wsmouse_parameters cfg_scaling = {
Index: share/man/man4/wsmouse.4
===
RCS file: /cvs/src/share/man/man4/wsmouse.4,v
retrieving revision 1.20
diff -u -p -u -p -r1.20 wsmouse.4
--- share/man/man4/wsmouse.44 Feb 2018 20:29:59 -   1.20
+++ share/man/man4/wsmouse.49 Feb 2021 00:17:14 -
@@ -26,7 +26,7 @@
 .\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 .\" SUCH DAMAGE.
 .\"
-.Dd $Mdocdate: February 4 2018 $
+.Dd $Mdocdate: February 8 2021 $
 .Dt WSMOUSE 4
 .Os
 .Sh NAME
@@ -87,14 +87,22 @@ is omitted, commands apply to
 .Pa /dev/wsmouse0 .
 .Bl -tag -width Ds
 .It Cm mouse.tp.tapping
-Setting this parameter to a non-zero value enables tap gestures.
+This list of three parameters sets the enabled tap gestures, in the order:
+.Bd -literal -offset indent
+.Sm off
+.Ar one-finger , two-finger , three-finger
+.Sm on
+.Ed
+.Pp
+Setting a parameter to a non-zero value enables that tap gesture.
 Contacts on the touchpad that are immediately released again
-trigger click events.
+trigger click events only if the corresponding tap gesture is enabled.
 One-finger, two-finger, and three-finger taps generate left-button,
 right-button, and middle-button clicks, respectively.
 If, within a short time interval, a second touch follows a one-finger
 tap, the button-up event is not issued until that touch ends
 .Pq Dq tap-and-drag .
+This requires the one-finger tap gesture to be enabled.
 .It Cm mouse.tp.scaling
 The value is a scale coefficient that is applied to the relative
 coordinates.
Index: sys/dev/wscons/wsconsio.h
===
RCS file: /cvs/src/sys/dev/wscons/wsconsio.h,v
retrieving revision 1.95
diff -u -p -u -p -r1.95 wsconsio.h
--- sys/dev/wscons/wsconsio.h   1 Oct 2020 17:28:14 -   1.95
+++ sys/dev/wscons/wsconsio.h   9 Feb 2021 00:17:17 -
@@ -319,7 +319,9 @@ enum wsmousecfg {
WSMOUSECFG_SWAPSIDES,   /* invert soft-button/scroll areas */
WSMOUSECFG_DISABLE, /* disable all output except for
   clicks in the top-button area */
-   WSMOUSECFG_TAPPING, /* enable tapping */
+   WSMOUSECFG_ONEFINGERTAPPING,/* enable one-finger tapping */
+   WSMOUSECFG_TWOFINGERTAPPING,/* enable two-finger tapping */
+   WSMOUSECFG_THREEFINGERTAPPING,  /* enable three-finger tapping */

/*
 * Touchpad options
@@ -345,7 +347,7 @@ enum wsmousecfg {
WSMOUSECFG_LOG_INPUT = 256,
WSMOUSECFG_LOG_EVENTS,
 };
-#define WSMOUSECFG_MAX 39  /* max size of param array per ioctl */
+#define WSMOUSECFG_MAX 41  /* max size of param array per ioctl */

 struct wsmouse_param {
enum wsmousecfg key;
Index: sys/dev/wscons/wstpad.c
===
RCS file: /cvs/src/sys/dev/wscons/wstpad.c,v
retrieving revision 1.26
diff -u -p -u -p -r1.26 wstpad.c
--- sys/dev/wscons/wstpad.c 13 Sep 2020 10:05:46 -  1.26
+++ sys/dev/wscons/wstpad.c 9 Feb 2021 00:17:17 -
@@ -139,18 +139,19 @@ struct tpad_touch {
 /*
  * wstpad.features
  */
-#define WSTPAD_SOFTBUTTONS (1 << 0)
-#define WSTPAD_SOFTMBTN(1 << 1)
-#define WSTPAD_TOPBUTTONS  (1 << 2)
-#define WSTPAD_TWOFINGERSCROLL (1 << 3)
-#define WSTPAD_EDGESCROLL  (1 << 4)
-#define WSTPAD_HORIZSCROLL (1 << 5)
-#define WSTPAD_SWAPSIDES   (1 

[Yahoo-eng-team] [Bug 1914857] [NEW] AttributeError: 'NoneType' object has no attribute 'db_find_rows'

2021-02-06 Thread rj
Public bug reported:

(neutron-server)[root@localhost /]# rpm -qa|egrep 'ovs|neutron'
python3-ovsdbapp-1.6.0-1.el8.noarch
openstack-neutron-common-17.0.0-1.el8.noarch
openstack-neutron-vpnaas-17.0.0-2.el8.noarch
python3-neutronclient-7.2.1-2.el8.noarch
python3-neutron-17.0.0-1.el8.noarch
python3-neutron-vpnaas-17.0.0-2.el8.noarch
openstack-neutron-17.0.0-1.el8.noarch
openstack-neutron-ml2-17.0.0-1.el8.noarch
python3-neutron-lib-2.6.1-2.el8.noarch
python3-neutron-dynamic-routing-17.0.0-2.el8.noarch


I found some error log in neutron-server.log:
2021-02-06 16:27:27.557 24 ERROR ovsdbapp.event [-] Unexpected exception in 
notify_loop: AttributeError: 'NoneType' object has no attribute 'db_find_rows'
2021-02-06 16:27:27.557 24 ERROR ovsdbapp.event Traceback (most recent call 
last):
2021-02-06 16:27:27.557 24 ERROR ovsdbapp.event   File 
"/usr/lib/python3.6/site-packages/ovsdbapp/event.py", line 159, in notify_loop
2021-02-06 16:27:27.557 24 ERROR ovsdbapp.event match.run(event, row, 
updates)
2021-02-06 16:27:27.557 24 ERROR ovsdbapp.event   File 
"/usr/lib/python3.6/site-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovsdb_monitor.py",
 line 347, in run
2021-02-06 16:27:27.557 24 ERROR ovsdbapp.event 
self.driver.delete_mac_binding_entries(row.external_ip)
2021-02-06 16:27:27.557 24 ERROR ovsdbapp.event   File 
"/usr/lib/python3.6/site-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py",
 line 1010, in delete_mac_binding_entries
2021-02-06 16:27:27.557 24 ERROR ovsdbapp.event mac_binds = 
self._sb_ovn.db_find_rows(
2021-02-06 16:27:27.557 24 ERROR ovsdbapp.event AttributeError: 'NoneType' 
object has no attribute 'db_find_rows'
2021-02-06 16:27:27.557 24 ERROR ovsdbapp.event 
2021-02-06 16:27:27.558 24 ERROR ovsdbapp.event [-] Unexpected exception in 
notify_loop: AttributeError: 'NoneType' object has no attribute 'db_find_rows'
2021-02-06 16:27:27.558 24 ERROR ovsdbapp.event Traceback (most recent call 
last):
2021-02-06 16:27:27.558 24 ERROR ovsdbapp.event   File 
"/usr/lib/python3.6/site-packages/ovsdbapp/event.py", line 159, in notify_loop
2021-02-06 16:27:27.558 24 ERROR ovsdbapp.event match.run(event, row, 
updates)
2021-02-06 16:27:27.558 24 ERROR ovsdbapp.event   File 
"/usr/lib/python3.6/site-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovsdb_monitor.py",
 line 347, in run
2021-02-06 16:27:27.558 24 ERROR ovsdbapp.event 
self.driver.delete_mac_binding_entries(row.external_ip)
2021-02-06 16:27:27.558 24 ERROR ovsdbapp.event   File 
"/usr/lib/python3.6/site-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py",
 line 1010, in delete_mac_binding_entries
2021-02-06 16:27:27.558 24 ERROR ovsdbapp.event mac_binds = 
self._sb_ovn.db_find_rows(
2021-02-06 16:27:27.558 24 ERROR ovsdbapp.event AttributeError: 'NoneType' 
object has no attribute 'db_find_rows'
2021-02-06 16:27:27.558 24 ERROR ovsdbapp.event 

how to fix this?

** Affects: networking-ovn
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: neutron-server ovn

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1914857

Title:
  AttributeError: 'NoneType' object has no attribute 'db_find_rows'

Status in networking-ovn:
  New
Status in neutron:
  New

Bug description:
  (neutron-server)[root@localhost /]# rpm -qa|egrep 'ovs|neutron'
  python3-ovsdbapp-1.6.0-1.el8.noarch
  openstack-neutron-common-17.0.0-1.el8.noarch
  openstack-neutron-vpnaas-17.0.0-2.el8.noarch
  python3-neutronclient-7.2.1-2.el8.noarch
  python3-neutron-17.0.0-1.el8.noarch
  python3-neutron-vpnaas-17.0.0-2.el8.noarch
  openstack-neutron-17.0.0-1.el8.noarch
  openstack-neutron-ml2-17.0.0-1.el8.noarch
  python3-neutron-lib-2.6.1-2.el8.noarch
  python3-neutron-dynamic-routing-17.0.0-2.el8.noarch

  
  I found some error log in neutron-server.log:
  2021-02-06 16:27:27.557 24 ERROR ovsdbapp.event [-] Unexpected exception in 
notify_loop: AttributeError: 'NoneType' object has no attribute 'db_find_rows'
  2021-02-06 16:27:27.557 24 ERROR ovsdbapp.event Traceback (most recent call 
last):
  2021-02-06 16:27:27.557 24 ERROR ovsdbapp.event   File 
"/usr/lib/python3.6/site-packages/ovsdbapp/event.py", line 159, in notify_loop
  2021-02-06 16:27:27.557 24 ERROR ovsdbapp.event match.run(event, row, 
updates)
  2021-02-06 16:27:27.557 24 ERROR ovsdbapp.event   File 
"/usr/lib/python3.6/site-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovsdb_monitor.py",
 line 347, in run
  2021-02-06 16:27:27.557 24 ERROR ovsdbapp.event 
self.driver.delete_mac_binding_entries(row.external_ip)
  2021-02-06 16:27:27.557 24 ERROR ovsdbapp.event   File 
"/usr/lib/python3.6/site-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py",
 line 1010, in 

Re: Entity ChildWorkEffort

2020-12-17 Thread Jason RJ

Hey Schumann,

I think there's some logic that makes this happen.

In the workeffort/entitydef/entitymodel_view.xml we have a Parent definition:

  <relation title="Parent" rel-entity-name="WorkEffort">
    <key-map field-name="workEffortParentId" rel-field-name="workEffortId"/>
  </relation>

The code in ModelReader.java takes care of creating the reverse relation
for Child automatically:

    // create the new relationship even if one exists so we can show
    // what we are looking for in the info message
    // don't do relationship to the same entity, unless title is
    // "Parent", then do a "Child" automatically
    String title = modelRelation.getTitle();
    if (curModelEntity.getEntityName().equals(relatedEnt.getEntityName())
            && "Parent".equals(title)) {
        title = "Child";
    }

Hope that helps,

Jason


On 17/12/2020 06:10, Schumann Ye wrote:

Dear all,

Does anyone have any idea where the Entity ChildWorkEffort comes from ( in what 
xml file it is defined ).

This question came up when I checked the file ProductionRun.java, which
contains the following code:
productionRunRoutingTasks =
productionRun.getRelated("ChildWorkEffort", .

Then I searched for a relation definition with the title equal to "Child" and
rel-entity-name="WorkEffort" but could NOT find any match.

Can anyone help?

Mvh
Schumann


Get Outlook for Android



[marble] [Bug 362429] crash when tick satellites in satellite configure window

2020-12-04 Thread RJ
https://bugs.kde.org/show_bug.cgi?id=362429

--- Comment #2 from RJ  ---
Yes. Just checked this in Fedora 32 (KDE Applications 19.12.2).

Some more information to reproduce it:
1. Enable the Satellite plugin
2. Add some satellites (5-7) to Marble
3. Delete some of them
4. Close the plugin window
5. Repeat steps 1-4 for a while

It crashes, but now Dr. Konqi does not "see" the crash while ABRT does.

-- 
You are receiving this mail because:
You are watching all bug changes.

Re: ecommerce - custom categories menu item in header

2020-11-16 Thread Jason RJ

Hi Mike,

Take a look at ecommerce/template/catalog/SideDeepCategory.ftl; there's a
macro there that has everything you need.


We adapted that to build our menus for us, the wrappers are attached to 
the context in the related groovy files.


Hope that helps.

Jason

On 16/11/2020 11:34, mike Butler wrote:

I am customising the main decorator for ecommerce and  have a Header.ftl which 
includes dropdown menus and I am working on a dropdown menu for categories.

What I have tried so far:
For the categories menu  I have included  ProductCategories.groovy as an action 
in the main-decorator (as used in the categories in the left panel of the main 
div) which I believe should provide a hash: “productCategoryID” which I can use 
in the Header.ftl.

Can you please confirm that is correct.

 From reading the ftl documentation I think I have to  use an ftl 
object-wrapper  but without an example I do not understand how to code that.


I have used <#list productCategoryId?keys as root>, which is just a guess, but it
does produce a very nice menu with a dropdown containing fifteen "productCategory"
entries (that's "productCategory" 15 times), so at least I know something is happening.

I would also like to understand what and how I can display from the hash. 
Current test coding is below:

Any pointers/help with this menu would be a very much appreciated learning 
experience for me.

Regards
Mike Butler
Freelance Consultant

<#--Some sort of object wrapping please help -->


   <#if (productCategoryId?has_content)>
 <#list productCategoryId?keys as root>   --- What would be the correct 
syntax?
   
 $ 
{uiLabelMap.productCategory}

 
   <#else>
 
   ${uiLabelMap.Category}
 
 
   ${uiLabelMap.Product}
 
   












Re: Setup issues

2020-10-26 Thread Jason RJ

Hi Dominic,

Province data and similar locality data is part of the seed data, loaded 
from the /framework/common/data/Geo*.xml files; you should be able 
to spot it being loaded in your initial setup log output. It's been a 
while since I've spun up a clean instance, but the data is certainly 
there, so it's just not being loaded for some reason.
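If the log shows those files were skipped, re-running just the seed readers usually fills in the Geo data. A sketch, assuming a Gradle-based OFBiz checkout (reader names may differ slightly between versions):

```shell
# From the OFBiz source root: load the seed data readers, which include
# framework/common/data/Geo*.xml (countries, states, provinces).
./gradlew "ofbiz --load-data readers=seed,seed-initial"

# Afterwards, grep the load log to confirm the Geo files were processed.
grep -i "Geo" runtime/logs/ofbiz.log
```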


Hope that helps,

Jason


On 26/10/2020 16:55, Dominic Amann wrote:

I am attempting to setup OFBiz for my business.

I am a Linux developer of many years experience.

I wanted a secure (SSL) setup, and I thought I would use Mariadb as I
already use that for other software, and I would avoid duplication with
another database. I stretched this further by wanting to have no demo data.

I found it very difficult to accomplish all this. Much more difficult than
I might have expected for a major apache project. However, I persevered,
and after a whole weekend of work, I got it working to a first order. I
have documented each step I took.

HOWEVER:

I CANNOT ADD A PARTY MAILING ADDRESS. There are no provinces listed for
Canada. There are no states listed for the USA. I don't know other
countries intimately, so I can't speak for them. As a result of being
incomplete, I cannot add addresses at all.

I initially thought this must be because I did not use the demo data. So, I
repeated my work, but left the demo data in place. SAME PROBLEM.

Then I thought that perhaps I should do the migration (to Mysql) AFTER
setting up the initial data. That was BROKEN and wouldn't complete on
import, and the resulting website wouldn't display properly.

Then I thought it might be the database - perhaps it doesn't work with
mysql. I would just do the straightforward install, and work with that.
SAME PROBLEM.

So here we are: 3 full days and I can't enter my first employee, even with
the simplest install with default settings.  Of course there is one
fundamental lesson here: never just believe that something works just
because it is from a reputable project that has been out there for years. I
should have known better.



Re: [FFmpeg-user] How to insert Static Frames in a video for a specific time period?

2020-09-01 Thread RJ
I have got this command to create the static video (with null audio):

ffmpeg -loop 1 -i static_img.jpg -f lavfi -i
anullsrc=channel_layout=5.1:sample_rate=48000 -t 10 -c:v libx264 -pix_fmt
yuv420p -vf scale=480:320 -y output.ts

But I want to insert this kind of static segment into a video at a specific 
time period.
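One way to get that is to cut the source at the insertion point, build the static segment as above, and concatenate the MPEG-TS pieces. A sketch, assuming input.mp4, a 10-second freeze inserted at t=30s, and that all segments share the same resolution and codecs (filenames and timestamps are placeholders):

```shell
# 1. Everything before the insertion point
ffmpeg -i input.mp4 -t 30 -vf scale=480:320 \
  -c:v libx264 -pix_fmt yuv420p -c:a aac part1.ts
# 2. The 10-second static segment (same idea as the command above)
ffmpeg -loop 1 -i static_img.jpg -f lavfi \
  -i anullsrc=channel_layout=5.1:sample_rate=48000 \
  -t 10 -c:v libx264 -pix_fmt yuv420p -vf scale=480:320 -c:a aac static.ts
# 3. Everything after the insertion point
ffmpeg -ss 30 -i input.mp4 -vf scale=480:320 \
  -c:v libx264 -pix_fmt yuv420p -c:a aac part2.ts
# 4. Join the MPEG-TS pieces without re-encoding
ffmpeg -i "concat:part1.ts|static.ts|part2.ts" -c copy output.ts
```

The concat protocol only behaves when every segment shares the same codec parameters, so matching the anullsrc channel layout and sample rate to the source's audio may also be necessary.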



--
Sent from: http://www.ffmpeg-archive.org/
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: Adding Attributes to a Product

2020-08-12 Thread Jason RJ

Randy,

It might require some light coding and config.

You need to uncomment this section in ecommerce\widget\CatalogScreens.xml:

  

You might need to create markup for your filters in 
ecommerce\template\catalog\LayeredNavBar.ftl; here, for example, are the 
default color and price range filters:


  <#if showColors>
    ${colorFeatureType.description}
    <#list colors as color>
      <a href="<@ofbizUrl>category/~category_id=${productCategoryId}?pft_${color.productFeatureTypeId}=${color.productFeatureId}&clearSearch=N<#if currentSearchCategory??>&searchCategoryId=${currentSearchCategory.productCategoryId}</#if></@ofbizUrl>">
        ${color.description} (${color.featureCount})
      </a>
    </#list>
  </#if>
  <#if showPriceRange>
    ${uiLabelMap.EcommercePriceRange}
    <#list priceRangeList as priceRange>
      <a href="<@ofbizUrl>category/~category_id=${productCategoryId}?LIST_PRICE_LOW=${priceRange.low}&LIST_PRICE_HIGH=${priceRange.high}&clearSearch=N<#if currentSearchCategory??>&searchCategoryId=${currentSearchCategory.productCategoryId}</#if></@ofbizUrl>">
        <@ofbizCurrency amount=priceRange.low /> - <@ofbizCurrency amount=priceRange.high /> (${priceRange.count})
      </a>
    </#list>
  </#if>

The variables showColors, colors, showPriceRange, priceRangeList are 
prepared in ecommerce\groovyScripts\catalog\LayeredNavigation.groovy so 
you can create your own here.


Like I said, some light coding required.

Jason


On 11/08/2020 21:20, Randy Evans wrote:

That sounds very interesting.

Can you tell me how to enable layered navigation?  There doesn't seem
to be much information available about that.

Thanks.




On 8/11/20, Rishi Solanki  wrote:

A numeric range won't work as you suspect, and for search you don't need to do
that. You can simply tag a feature with "> 1000" and "< 1000" as strings.
You can use a category, a feature, or even an attribute. Once search is
enabled for that feature, you simply need to show the tagged products. This
can be done by all routes; the catalog manager or product creation code
needs to make sure all products are tagged properly.

In this way no custom code would be required. HTH!

Best Regards,
--
Rishi Solanki
*CTO, Mindpath Technology*
Intelligent Solutions
cell: +91-98932-87847
LinkedIn <https://www.linkedin.com/in/rishi-solanki-62271b7/>


On Tue, Aug 11, 2020 at 6:33 PM Jason RJ  wrote:


Hi Randy

We have done something similar but using Product Features to drive the
dropdown and Product Variants for each type, this supports search as
expected too since features are added to search criteria as product
keywords. Turning on layered navigation and creating custom filters in
LayeredNavBar.ftl gives you a filter list, it might be possible to build
a range filter that way.

Jason


[Bug 1880388] Re: rpi3b+ becomes unresponsive after closing a program

2020-08-12 Thread RJ
Just reiterating what I mentioned in the Github issue, that the test
kernel works great. Thanks.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1880388

Title:
  rpi3b+ becomes unresponsive after closing a program

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-raspi/+bug/1880388/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

Re: Adding Attributes to a Product

2020-08-11 Thread Jason RJ

Hi Randy

We have done something similar but using Product Features to drive the 
dropdown and Product Variants for each type, this supports search as 
expected too since features are added to search criteria as product 
keywords. Turning on layered navigation and creating custom filters in 
LayeredNavBar.ftl gives you a filter list, it might be possible to build 
a range filter that way.


Jason


On 10/08/2020 15:15, Jacques Le Roux wrote:

Hi Randy,

I think this is what you are looking for 
https://markmail.org/message/c5kxv6snztxwqwxk


HTH

Jacques

Le 07/08/2020 à 22:29, Randy Evans a écrit :

I am evaluating OFBiz and have set up a test instance.

I am trying to enter a product that I need to add some data, one of
which is numeric and the other should be selectable from a dropdown
list.

An example of this would be resistors which have a footprint and would
be selectable from a dropdown list like "throughole","0603", "0402"

The other data would be keyed in as a numeric value (the resistance).

I would need to be able to search for a specific product like
"throughole" and a range for the resistance like ">1000 and <1"
for example.  In "Find Product" there is an advanced search but I'm
not sure you can search for Attributes.

It looks like CATALOG MANAGER/Products/Attributes may be what I am
looking for but it is not clear if I can specify a numeric value only
and also if I can set up a dropdown list.



Is this type of entry and search possible with OFBiz (without custom
programming)?

Thanks for any information.




[Bug 1880388] Re: rpi3b+ becomes unresponsive after closing a program

2020-08-10 Thread RJ
@juergh, sorry I didn't get an email notification for whatever reason.
I have loaded this onto a board and will try tomorrow and let you know! Thanks.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1880388

Title:
  rpi3b+ becomes unresponsive after closing a program

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-raspi/+bug/1880388/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1880388] Re: rpi3b+ becomes unresponsive after closing a program

2020-07-16 Thread RJ
Just want to add that I also had no problems with the Raspberry Pi OS 64-bit
beta, which is what I am using at the moment.

Thanks Juerg.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1880388

Title:
  rpi3b+ becomes unresponsive after closing a program

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-raspi/+bug/1880388/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[jira] [Commented] (ARROW-9453) [Rust] Compiling Rust libary against WASM32 library

2020-07-14 Thread RJ Atwal (Jira)


[ 
https://issues.apache.org/jira/browse/ARROW-9453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17157435#comment-17157435
 ] 

RJ Atwal commented on ARROW-9453:
-

[~andygrove] To answer your questions:
1. Wasm code would be running in the same process as Spark (unfortunately there 
will be a JNI wall between the two execution areas though).
2. I want to use arrow as the format to pass data (references) between Java 
land and the WASM (native) execution, so I need libraries to handle arrow in 
both environments.

> [Rust] Compiling Rust libary against WASM32 library
> ---
>
> Key: ARROW-9453
> URL: https://issues.apache.org/jira/browse/ARROW-9453
> Project: Apache Arrow
>  Issue Type: New Feature
>  Components: Rust, Rust - DataFusion
>Affects Versions: 1.0.0
>Reporter: RJ Atwal
>Priority: Major
>
> I am hoping to support arch_target=Wasm32 as a compilation target for the 
> rust arrow & datafusion packages.
> My plan is to add compiler conditionals around any I/O features and libc 
> dependent features of these two libraries.
> My intent is to be able to use the apache arrow libraries in UDF style WASM 
> functions which pass arrow memory references between the host (spark 
> environment) and the WASM code



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[Discuss] [Rust] Looking to add Wasm32 compile target for rust library

2020-07-13 Thread RJ Atwal
 Hi all,

Looking for guidance on how to submit a design and PR to add WASM32 support
to apache arrow's rust libraries.

I am looking to use the arrow library to pass data in arrow format between
the host spark environment and UDFs defined in WASM.

I created the following JIRA ticket to capture the work
https://issues.apache.org/jira/browse/ARROW-9453
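For anyone wanting to try this locally, the usual first check is a cross-compile against the wasm32 target. A sketch, assuming a rustup toolchain and that the I/O-heavy code ends up gated behind default cargo features (the crate name and feature layout here are illustrative):

```shell
# Install the wasm32 target for the active toolchain
rustup target add wasm32-unknown-unknown

# Attempt to build the arrow crate for wasm32, with default
# (potentially libc/I/O-dependent) features switched off
cargo build -p arrow --target wasm32-unknown-unknown --no-default-features
```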

Thanks,
RJ


[jira] [Created] (ARROW-9453) Compiling Rust libary against WASM32 library

2020-07-13 Thread RJ Atwal (Jira)
RJ Atwal created ARROW-9453:
---

 Summary: Compiling Rust libary against WASM32 library
 Key: ARROW-9453
 URL: https://issues.apache.org/jira/browse/ARROW-9453
 Project: Apache Arrow
  Issue Type: New Feature
  Components: Rust, Rust - DataFusion
Affects Versions: 1.0.0
Reporter: RJ Atwal
 Fix For: 1.0.0


I am hoping to support arch_target=Wasm32 as a compilation target for the rust 
arrow & datafusion packages.

My plan is to add compiler conditionals around any I/O features and libc 
dependent features of these two libraries.

My intent is to be able to use the apache arrow libraries in UDF style WASM 
functions which pass arrow memory references between the host (spark 
environment) and the WASM code



--
This message was sent by Atlassian Jira
(v8.3.4#803005)



Re: Bug#963706: kdenlive with ffmpeg version 7:4.3-2 can't Render Project to MP4

2020-07-03 Thread Thom RJ
I now have "h264_qsv" in the decoders/encoders sections with ffmpeg version
4.3-2.
See the full output log in the attachments.

Fri, 3 Jul 2020 at 16:42, Patrick Matthäi :

> Hi
>
> Am 25.06.2020 um 18:10 schrieb Thom:
> > Package: kdenlive
> > Version: 20.04.2-1
> > Severity: normal
> >
> > Dear Maintainer,
> >
> > 1. launch kdenlive
> > 2. add movie clip to the new project
> > 3. move clip to timeline
> > 4. Press Render
> > 5. on the Render Project tab try to select "MP4 - the dominating format
> (H264/AAC)" and get the message "Unsupported video codec: libx264"
> >
> > Rollback to ffmpeg 7:4.2.2-1 solves the problem for me.
> >
> >
>
> What is your output from "ffmpeg -codecs|grep x264" with 7:4.3-2 and
> 7:4.2.2-1?
>
> @Debian Multimedia team:
> Has there changed something?
>
> --
> /*
> Mit freundlichem Gruß / With kind regards,
>  Patrick Matthäi
>  GNU/Linux Debian Developer
>
>   Blog: https://www.linux-dev.org/
> E-Mail: pmatth...@debian.org
> patr...@linux-dev.org
> */
>
>
$ ffmpeg -codecs | grep x264
ffmpeg version 4.3-2 Copyright (c) 2000-2020 the FFmpeg developers
  built with gcc 9 (Debian 9.3.0-13)
  configuration: --prefix=/usr --extra-version=2 --toolchain=hardened 
--libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu 
--arch=amd64 --enable-gpl --disable-stripping --enable-avresample 
--disable-filter=resample --enable-gnutls --enable-ladspa --enable-libaom 
--enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca 
--enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig 
--enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm 
--enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg 
--enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq 
--enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy 
--enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh 
--enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis 
--enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 
--enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 
--enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 
--enable-pocketsphinx --enable-libmfx --enable-libdc1394 --enable-libdrm 
--enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 
--enable-shared
  WARNING: library configuration mismatch
  avcodec configuration: --prefix=/usr --extra-version=2 
--toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu 
--incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl 
--disable-stripping --enable-avresample --disable-filter=resample 
--enable-gnutls --enable-ladspa --enable-libaom --enable-libass 
--enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio 
--enable-libcodec2 --enable-libflite --enable-libfontconfig 
--enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm 
--enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg 
--enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq 
--enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy 
--enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh 
--enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis 
--enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 
--enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 
--enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 
--enable-pocketsphinx --enable-libmfx --enable-libdc1394 --enable-libdrm 
--enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 
--enable-shared --enable-version3 --disable-doc --disable-programs 
--enable-libaribb24 --enable-liblensfun --enable-libopencore_amrnb 
--enable-libopencore_amrwb --enable-libtesseract --enable-libvo_amrwbenc
  avfilter configuration: --prefix=/usr --extra-version=2 
--toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu 
--incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl 
--disable-stripping --enable-avresample --disable-filter=resample 
--enable-gnutls --enable-ladspa --enable-libaom --enable-libass 
--enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio 
--enable-libcodec2 --enable-libflite --enable-libfontconfig 
--enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm 
--enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg 
--enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq 
--enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy 
--enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh 
--enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis 
--enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 
--enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 

[jira] [Commented] (ATLAS-3852) Entity Bulk Create with unique reference

2020-07-01 Thread Vasanth kumar RJ (Jira)


[ 
https://issues.apache.org/jira/browse/ATLAS-3852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17149651#comment-17149651
 ] 

Vasanth kumar RJ commented on ATLAS-3852:
-

[~madhan] I shared my use case, and the patch includes test cases. Can you 
guide me on whether this patch is useful or not?

> Entity Bulk Create with unique reference
> 
>
> Key: ATLAS-3852
> URL: https://issues.apache.org/jira/browse/ATLAS-3852
> Project: Atlas
>  Issue Type: Improvement
>Reporter: Vasanth kumar RJ
>Priority: Major
> Fix For: 2.1.0
>
> Attachments: ATLAS-3852.patch, BulkPostCreate.json
>
>
> Entities created in bulk and unique referenced in same request. Bulk create 
> DB, Table and Column referenced in a request itself.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ATLAS-3852) Entity Bulk Create with unique reference

2020-06-25 Thread Vasanth kumar RJ (Jira)


[ 
https://issues.apache.org/jira/browse/ATLAS-3852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17145755#comment-17145755
 ] 

Vasanth kumar RJ commented on ATLAS-3852:
-

[~nikhilbonte] - I feel using the actual unique attribute (qualifiedName) for 
the reference will avoid errors and is more readable when a bulk request has 
multiple DBs, tables and columns. A negative guid is a temporary value.

> Entity Bulk Create with unique reference
> 
>
> Key: ATLAS-3852
> URL: https://issues.apache.org/jira/browse/ATLAS-3852
> Project: Atlas
>  Issue Type: Improvement
>Reporter: Vasanth kumar RJ
>Priority: Major
> Fix For: 2.1.0
>
> Attachments: ATLAS-3852.patch, BulkPostCreate.json
>
>
> Entities created in bulk and unique referenced in same request. Bulk create 
> DB, Table and Column referenced in a request itself.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ATLAS-3852) Entity Bulk Create with unique reference

2020-06-23 Thread Vasanth kumar RJ (Jira)


[ 
https://issues.apache.org/jira/browse/ATLAS-3852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143201#comment-17143201
 ] 

Vasanth kumar RJ commented on ATLAS-3852:
-

[~madhan] - Currently in the Entity Bulk POST API, every entity reference (Table 
entity -> DB reference) uses a guid or unique attribute of an entity that has 
already been created. We cannot create a DB entity and a Table entity that 
references it in the same bulk request.

The fix is to support bulk-creating a DB entity and a Table entity that 
references the DB in the same request.

The use case is to support creating a new DB entity, its new Table entities 
and new Column entities in a single bulk request.

> Entity Bulk Create with unique reference
> 
>
> Key: ATLAS-3852
> URL: https://issues.apache.org/jira/browse/ATLAS-3852
> Project: Atlas
>  Issue Type: Improvement
>    Reporter: Vasanth kumar RJ
>Priority: Major
> Fix For: 2.1.0
>
> Attachments: ATLAS-3852.patch
>
>
> Entities created in bulk and unique referenced in same request. Bulk create 
> DB, Table and Column referenced in a request itself.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ATLAS-3852) Entity Bulk Create with unique reference

2020-06-23 Thread Vasanth kumar RJ (Jira)


[ 
https://issues.apache.org/jira/browse/ATLAS-3852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143152#comment-17143152
 ] 

Vasanth kumar RJ commented on ATLAS-3852:
-

[~mad...@apache.org] [~sarath] Please review this ticket.

> Entity Bulk Create with unique reference
> 
>
> Key: ATLAS-3852
> URL: https://issues.apache.org/jira/browse/ATLAS-3852
> Project: Atlas
>  Issue Type: Improvement
>Reporter: Vasanth kumar RJ
>Priority: Major
> Fix For: 2.1.0
>
> Attachments: ATLAS-3852.patch
>
>
> Entities created in bulk and unique referenced in same request. Bulk create 
> DB, Table and Column referenced in a request itself.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Review Request 72609: ATLAS-3852: Entity Bulk Create with unique reference

2020-06-21 Thread Vasanth kumar RJ

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/72609/
---

Review request for atlas.


Bugs: ATLAS-3852
https://issues.apache.org/jira/browse/ATLAS-3852


Repository: atlas


Description
---

Entities created in bulk and unique referenced in same request. Bulk create DB, 
Table and Column referenced in a request itself.


Diffs
-

  intg/src/test/java/org/apache/atlas/TestUtilsV2.java 2b9cf6e7f 
  
repository/src/main/java/org/apache/atlas/repository/store/graph/v2/AtlasEntityStoreV2.java
 bf1629cb3 
  
repository/src/main/java/org/apache/atlas/repository/store/graph/v2/UniqAttrBasedEntityResolver.java
 d1c3bde11 
  
repository/src/test/java/org/apache/atlas/repository/store/graph/v2/AtlasEntityStoreV2Test.java
 b9cbef1b0 
  webapp/src/test/java/org/apache/atlas/web/adapters/TestEntitiesREST.java 
615bc0f1b 


Diff: https://reviews.apache.org/r/72609/diff/1/


Testing
---


Thanks,

Vasanth kumar RJ



[jira] [Updated] (ATLAS-3852) Entity Bulk Create with unique reference

2020-06-21 Thread Vasanth kumar RJ (Jira)


 [ 
https://issues.apache.org/jira/browse/ATLAS-3852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasanth kumar RJ updated ATLAS-3852:

Attachment: ATLAS-3852.patch

> Entity Bulk Create with unique reference
> 
>
> Key: ATLAS-3852
> URL: https://issues.apache.org/jira/browse/ATLAS-3852
> Project: Atlas
>  Issue Type: Improvement
>Reporter: Vasanth kumar RJ
>Priority: Major
> Fix For: 2.1.0
>
> Attachments: ATLAS-3852.patch
>
>
> Entities created in bulk and unique referenced in same request. Bulk create 
> DB, Table and Column referenced in a request itself.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ATLAS-3852) Entity Bulk Create with unique reference

2020-06-21 Thread Vasanth kumar RJ (Jira)
Vasanth kumar RJ created ATLAS-3852:
---

 Summary: Entity Bulk Create with unique reference
 Key: ATLAS-3852
 URL: https://issues.apache.org/jira/browse/ATLAS-3852
 Project: Atlas
  Issue Type: Improvement
Reporter: Vasanth kumar RJ
 Fix For: 2.1.0


Entities created in bulk and unique referenced in same request. Bulk create DB, 
Table and Column referenced in a request itself.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[Bug 1880388] Re: rpi3b+ becomes unresponsive after closing a program

2020-06-08 Thread RJ
Sure. I have attached the most recent one.

** Attachment added: "kern.log"
   
https://bugs.launchpad.net/ubuntu/+source/linux-raspi/+bug/1880388/+attachment/5381642/+files/kern.log

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1880388

Title:
  rpi3b+ becomes unresponsive after closing a program

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-raspi/+bug/1880388/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

Re: Override SECA Definition

2020-05-29 Thread Jason RJ



On 29/05/2020 15:46, Sakthivel Vellingiri wrote:

Hi All -
*Summary:*
Appreciate any pointers. Is it possible to override a SECA definition by
overriding the same definition in
hot-deploy//servicedef/secas.xml?

*Details:*
There is a SECA definition in applications/order/servicedef/secas.xml; as
you can see, noteInfo is hardcoded, and I want to replace it with my noteInfo
without touching code under applications
  
 
 
 
 
 
 
 
 
Is it possible to achieve this by overriding it in a hot-deploy component that
has extended this component: applications/order-extended/servicedef/secas.xml

 
 
 
 
 
 
 
 

I had tried this without success; however, I am using this approach for
overriding services.

regards
Sakthi



Hi Sakthi,

Do you have a reference to secas file in your ofbiz-component.xml:




In theory multiple SECAs can be attached to the same service, so both the 
original and yours will be called. I'm not sure how the priority would be 
decided, but hopefully core SECAs will fire first.


Jason



[Bug 1880388] Re: rpi3b+ becomes unresponsive after closing a program

2020-05-25 Thread RJ
Reportedly this was not an issue with 18.04.3, although I cannot say I
have personally tried this.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1880388

Title:
  rpi3b+ becomes unresponsive after closing a program

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-raspi/+bug/1880388/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1880388] Re: rpi3b+ becomes unresponsive after closing a program

2020-05-24 Thread RJ
** Summary changed:

- rpi3b becomes unresponsive after closing a program
+ rpi3b+ becomes unresponsive after closing a program

** Description changed:

- I am running ROS2 on this raspberry pi 3B that is running as the main
+ I am running ROS2 on this raspberry pi 3B+ that is running as the main
  computer on a turtlebot3. There is two USB devices connected to the RPi,
  one is to communicate with the LIDAR and the other is to communicate
  with the control board, a OpenCR board.
  
  After "bringing the robot up" I open a new shell in tmux, and run `ros2
  topic echo battery_state` which starts outputting ROS2 messages to the
  terminal. Upon ctrl+c'ing out of this, the system become unresponsive. I
  repeated this while connected to the serial console and captured the
  error messages that are being output.
  
  Issue I rose on github:
  https://github.com/ROBOTIS-GIT/turtlebot3/issues/593
  
  ProblemType: Bug
  DistroRelease: Ubuntu 20.04
  Package: linux-image-5.4.0-1011-raspi 5.4.0-1011.11
  ProcVersionSignature: User Name 5.4.0-1011.11-raspi 5.4.34
  Uname: Linux 5.4.0-1011-raspi aarch64
  ApportVersion: 2.20.11-0ubuntu27
  Architecture: arm64
  CasperMD5CheckResult: skip
  Date: Sun May 24 10:31:26 2020
  SourcePackage: linux-raspi
  UpgradeStatus: No upgrade log present (probably fresh install)
  
  Serial console output:
  
  MMC:   mmc@7e202000: 0, mmcnr@7e30: 1
  Loading Environment from FAT... *** Warning - bad CRC, using default 
environment
  
  In:serial
  Out:   vidconsole
  Err:   vidconsole
  Net:   No ethernet found.
  starting USB...
  Bus usb@7e98: scanning bus usb@7e98 for devices... 6 USB Device(s) 
found
     scanning usb for storage devices... 0 Storage Device(s) found
  ## Info: input data size = 6 = 0x6
  Hit any key to stop autoboot:  0
  switch to partitions #0, OK
  mmc0 is current device
  Scanning mmc 0:1...
  Found U-Boot script /boot.scr
  2603 bytes read in 6 ms (422.9 KiB/s)
  ## Executing script at 0240
  8378005 bytes read in 364 ms (21.9 MiB/s)
  Total of 1 halfword(s) were the same
  Decompressing kernel...
  Uncompressed size: 25905664 = 0x18B4A00
  29426757 bytes read in 1262 ms (22.2 MiB/s)
  Booting Ubuntu (with booti) from mmc 0:...
  ## Flattened Device Tree blob at 0260
     Booting using the fdt blob at 0x260
     Using Device Tree in place at 0260, end 02609f2f
  
  Starting kernel ...
  
  [1.966598] spi-bcm2835 3f204000.spi: could not get clk: -517
  ln: /tmp/mountroot-fail-hooks.d//scripts/init-premount/lvm2: No such file or 
directory
  ext4
  Thu Jan  1 00:00:07 UTC 1970
  date: invalid date '  Wed Apr  1 17:23:46 2020'
  -.mount
  dev-mqueue.mount
  sys-kernel-debug.mount
  sys-kernel-tracing.mount
  kmod-static-nodes.service
  systemd-modules-load.service
  systemd-remount-fs.service
  ufw.service
  sys-fs-fuse-connections.mount
  sys-kernel-config.mount
  systemd-sysctl.service
  systemd-random-seed.service
  systemd-sysusers.service
  keyboard-setup.service
  systemd-tmpfiles-setup-dev.service
  swapfile.swap
  lvm2-monitor.service
  systemd-udevd.service
  systemd-journald.service
  systemd-udev-trigger.service
  systemd-journal-flush.service
  [   14.813759] brcmfmac: brcmf_fw_alloc_request: using 
brcm/brcmfmac43455-sdio for chip BCM4345/6
  [   15.128676] brcmfmac: brcmf_fw_alloc_request: using 
brcm/brcmfmac43455-sdio for chip BCM4345/6
  [   15.174878] brcmfmac: brcmf_c_preinit_dcmds: Firmware: BCM4345/6 wl0: Feb 
27 2018 03:15:32 version 7.45.154 (r684107 CY) FWID 01-4fbe0b04
  systemd-rfkill.service
  systemd-udev-settle.service
  netplan-wpa-wlan0.service
  multipathd.service
  [   17.840942] brcmfmac: brcmf_cfg80211_set_power_mgmt: power save enabled
  systemd-fsckd.service
  snap-core18-1708.mount
  snap-core18-1753.mount
  snap-lxd-14808.mount
  snap-lxd-15066.mount
  snap-snapd-7267.mount
  systemd-fsck@dev-disk-by\x2dlabel-system\x2dboot.service
  boot-firmware.mount
  console-setup.service
   Mounting Arbitrary Executable File Formats File System...
  finalrd.service
  [  OK  ] Finished Tell Plymouth To Write Out Runtime Data.
  [  OK  ] Mounted Arbitrary Executable File Formats File System.
  proc-sys-fs-binfmt_misc.mount
  [  OK  ] Finished Create Volatile Files and Directories.
  systemd-tmpfiles-setup.service
   Starting Network Time Synchronization...
   Starting Update UTMP about System Boot/Shutdown...
  [  OK  ] Finished Enable support for…onal executable binary formats.
  binfmt-support.service
  [  OK  ] Finished Update UTMP about System Boot/Shutdown.
  systemd-update-utmp.service
  [  OK  ] Finished Load AppArmor profiles.
  apparmor.service
   Starting Load AppArmor prof… managed internally by snapd...
  [  OK  ] Finished Load AppArmor prof…es managed internally by snapd.
  snapd.apparmor.service
  [  OK  ] Started Network Time Synchronization.
  [  OK  ] Reached target System Time Set.
  [  OK  ] Reached target System Time 

[Bug 1880388] Re: rpi3b becomes unresponsive after closing a program

2020-05-24 Thread RJ
** Description changed:

- I am running ROS2 on this raspberry pi 3B. After "bringing the robot up"
- I open a new shell in tmux, and run `ros2 topic echo battery_state`
- which starts outputting ROS2 messages to the terminal. Upon ctrl+c'ing
- out of this, the system become unresponsive. I connected to the serial
- console and captured the error messages that are being output.
+ I am running ROS2 on this raspberry pi 3B that is running as the main
+ computer on a turtlebot3. There is two USB devices connected to the RPi,
+ one is to communicate with the LIDAR and the other is to communicate
+ with the control board, a OpenCR board.
+ 
+ After "bringing the robot up" I open a new shell in tmux, and run `ros2
+ topic echo battery_state` which starts outputting ROS2 messages to the
+ terminal. Upon ctrl+c'ing out of this, the system become unresponsive. I
+ repeated this while connected to the serial console and captured the
+ error messages that are being output.
  
  Issue I rose on github:
  https://github.com/ROBOTIS-GIT/turtlebot3/issues/593
  
  ProblemType: Bug
  DistroRelease: Ubuntu 20.04
  Package: linux-image-5.4.0-1011-raspi 5.4.0-1011.11
  ProcVersionSignature: User Name 5.4.0-1011.11-raspi 5.4.34
  Uname: Linux 5.4.0-1011-raspi aarch64
  ApportVersion: 2.20.11-0ubuntu27
  Architecture: arm64
  CasperMD5CheckResult: skip
  Date: Sun May 24 10:31:26 2020
  SourcePackage: linux-raspi
  UpgradeStatus: No upgrade log present (probably fresh install)
  
- 
  Serial console output:
  
  MMC:   mmc@7e202000: 0, mmcnr@7e30: 1
  Loading Environment from FAT... *** Warning - bad CRC, using default 
environment
  
  In:serial
  Out:   vidconsole
  Err:   vidconsole
  Net:   No ethernet found.
  starting USB...
  Bus usb@7e98: scanning bus usb@7e98 for devices... 6 USB Device(s) 
found
-scanning usb for storage devices... 0 Storage Device(s) found
+    scanning usb for storage devices... 0 Storage Device(s) found
  ## Info: input data size = 6 = 0x6
- Hit any key to stop autoboot:  0 
+ Hit any key to stop autoboot:  0
  switch to partitions #0, OK
  mmc0 is current device
  Scanning mmc 0:1...
  Found U-Boot script /boot.scr
  2603 bytes read in 6 ms (422.9 KiB/s)
  ## Executing script at 0240
  8378005 bytes read in 364 ms (21.9 MiB/s)
  Total of 1 halfword(s) were the same
  Decompressing kernel...
  Uncompressed size: 25905664 = 0x18B4A00
  29426757 bytes read in 1262 ms (22.2 MiB/s)
  Booting Ubuntu (with booti) from mmc 0:...
  ## Flattened Device Tree blob at 0260
-Booting using the fdt blob at 0x260
-Using Device Tree in place at 0260, end 02609f2f
+    Booting using the fdt blob at 0x260
+    Using Device Tree in place at 0260, end 02609f2f
  
  Starting kernel ...
  
  [1.966598] spi-bcm2835 3f204000.spi: could not get clk: -517
  ln: /tmp/mountroot-fail-hooks.d//scripts/init-premount/lvm2: No such file or 
directory
  ext4
  Thu Jan  1 00:00:07 UTC 1970
  date: invalid date '  Wed Apr  1 17:23:46 2020'
  -.mount
  dev-mqueue.mount
  sys-kernel-debug.mount
  sys-kernel-tracing.mount
  kmod-static-nodes.service
  systemd-modules-load.service
  systemd-remount-fs.service
  ufw.service
  sys-fs-fuse-connections.mount
  sys-kernel-config.mount
  systemd-sysctl.service
  systemd-random-seed.service
  systemd-sysusers.service
  keyboard-setup.service
  systemd-tmpfiles-setup-dev.service
  swapfile.swap
  lvm2-monitor.service
  systemd-udevd.service
  systemd-journald.service
  systemd-udev-trigger.service
  systemd-journal-flush.service
  [   14.813759] brcmfmac: brcmf_fw_alloc_request: using 
brcm/brcmfmac43455-sdio for chip BCM4345/6
  [   15.128676] brcmfmac: brcmf_fw_alloc_request: using 
brcm/brcmfmac43455-sdio for chip BCM4345/6
  [   15.174878] brcmfmac: brcmf_c_preinit_dcmds: Firmware: BCM4345/6 wl0: Feb 
27 2018 03:15:32 version 7.45.154 (r684107 CY) FWID 01-4fbe0b04
  systemd-rfkill.service
  systemd-udev-settle.service
  netplan-wpa-wlan0.service
  multipathd.service
  [   17.840942] brcmfmac: brcmf_cfg80211_set_power_mgmt: power save enabled
  systemd-fsckd.service
  snap-core18-1708.mount
  snap-core18-1753.mount
  snap-lxd-14808.mount
  snap-lxd-15066.mount
  snap-snapd-7267.mount
  systemd-fsck@dev-disk-by\x2dlabel-system\x2dboot.service
  boot-firmware.mount
  console-setup.service
-  Mounting Arbitrary Executable File Formats File System...
+  Mounting Arbitrary Executable File Formats File System...
  finalrd.service
  [  OK  ] Finished Tell Plymouth To Write Out Runtime Data.
  [  OK  ] Mounted Arbitrary Executable File Formats File System.
  proc-sys-fs-binfmt_misc.mount
  [  OK  ] Finished Create Volatile Files and Directories.
  systemd-tmpfiles-setup.service
-  Starting Network Time Synchronization...
-  Starting Update UTMP about System Boot/Shutdown...
+  Starting Network Time Synchronization...
+  Starting Update UTMP about 

[Kernel-packages] [Bug 1880388] [NEW] rpi3b becomes unresponsive after closing a program

2020-05-24 Thread RJ
Public bug reported:

I am running ROS2 on this raspberry pi 3B. After "bringing the robot up"
I open a new shell in tmux and run `ros2 topic echo battery_state`,
which starts outputting ROS2 messages to the terminal. Upon ctrl+c'ing
out of this, the system becomes unresponsive. I connected to the serial
console and captured the error messages that are being output.

Issue I raised on GitHub:
https://github.com/ROBOTIS-GIT/turtlebot3/issues/593

ProblemType: Bug
DistroRelease: Ubuntu 20.04
Package: linux-image-5.4.0-1011-raspi 5.4.0-1011.11
ProcVersionSignature: User Name 5.4.0-1011.11-raspi 5.4.34
Uname: Linux 5.4.0-1011-raspi aarch64
ApportVersion: 2.20.11-0ubuntu27
Architecture: arm64
CasperMD5CheckResult: skip
Date: Sun May 24 10:31:26 2020
SourcePackage: linux-raspi
UpgradeStatus: No upgrade log present (probably fresh install)


Serial console output:

MMC:   mmc@7e202000: 0, mmcnr@7e30: 1
Loading Environment from FAT... *** Warning - bad CRC, using default environment

In:serial
Out:   vidconsole
Err:   vidconsole
Net:   No ethernet found.
starting USB...
Bus usb@7e98: scanning bus usb@7e98 for devices... 6 USB Device(s) found
   scanning usb for storage devices... 0 Storage Device(s) found
## Info: input data size = 6 = 0x6
Hit any key to stop autoboot:  0 
switch to partitions #0, OK
mmc0 is current device
Scanning mmc 0:1...
Found U-Boot script /boot.scr
2603 bytes read in 6 ms (422.9 KiB/s)
## Executing script at 0240
8378005 bytes read in 364 ms (21.9 MiB/s)
Total of 1 halfword(s) were the same
Decompressing kernel...
Uncompressed size: 25905664 = 0x18B4A00
29426757 bytes read in 1262 ms (22.2 MiB/s)
Booting Ubuntu (with booti) from mmc 0:...
## Flattened Device Tree blob at 0260
   Booting using the fdt blob at 0x260
   Using Device Tree in place at 0260, end 02609f2f

Starting kernel ...

[1.966598] spi-bcm2835 3f204000.spi: could not get clk: -517
ln: /tmp/mountroot-fail-hooks.d//scripts/init-premount/lvm2: No such file or 
directory
ext4
Thu Jan  1 00:00:07 UTC 1970
date: invalid date '  Wed Apr  1 17:23:46 2020'
-.mount
dev-mqueue.mount
sys-kernel-debug.mount
sys-kernel-tracing.mount
kmod-static-nodes.service
systemd-modules-load.service
systemd-remount-fs.service
ufw.service
sys-fs-fuse-connections.mount
sys-kernel-config.mount
systemd-sysctl.service
systemd-random-seed.service
systemd-sysusers.service
keyboard-setup.service
systemd-tmpfiles-setup-dev.service
swapfile.swap
lvm2-monitor.service
systemd-udevd.service
systemd-journald.service
systemd-udev-trigger.service
systemd-journal-flush.service
[   14.813759] brcmfmac: brcmf_fw_alloc_request: using brcm/brcmfmac43455-sdio 
for chip BCM4345/6
[   15.128676] brcmfmac: brcmf_fw_alloc_request: using brcm/brcmfmac43455-sdio 
for chip BCM4345/6
[   15.174878] brcmfmac: brcmf_c_preinit_dcmds: Firmware: BCM4345/6 wl0: Feb 27 
2018 03:15:32 version 7.45.154 (r684107 CY) FWID 01-4fbe0b04
systemd-rfkill.service
systemd-udev-settle.service
netplan-wpa-wlan0.service
multipathd.service
[   17.840942] brcmfmac: brcmf_cfg80211_set_power_mgmt: power save enabled
systemd-fsckd.service
snap-core18-1708.mount
snap-core18-1753.mount
snap-lxd-14808.mount
snap-lxd-15066.mount
snap-snapd-7267.mount
systemd-fsck@dev-disk-by\x2dlabel-system\x2dboot.service
boot-firmware.mount
console-setup.service
 Mounting Arbitrary Executable File Formats File System...
finalrd.service
[  OK  ] Finished Tell Plymouth To Write Out Runtime Data.
[  OK  ] Mounted Arbitrary Executable File Formats File System.
proc-sys-fs-binfmt_misc.mount
[  OK  ] Finished Create Volatile Files and Directories.
systemd-tmpfiles-setup.service
 Starting Network Time Synchronization...
 Starting Update UTMP about System Boot/Shutdown...
[  OK  ] Finished Enable support for…onal executable binary formats.
binfmt-support.service
[  OK  ] Finished Update UTMP about System Boot/Shutdown.
systemd-update-utmp.service
[  OK  ] Finished Load AppArmor profiles.
apparmor.service
 Starting Load AppArmor prof… managed internally by snapd...
[  OK  ] Finished Load AppArmor prof…es managed internally by snapd.
snapd.apparmor.service
[  OK  ] Started Network Time Synchronization.
[  OK  ] Reached target System Time Set.
[  OK  ] Reached target System Time Synchronized.
systemd-timesyncd.service
[   25.080879] cloud-init[1148]: Cloud-init v. 20.1-10-g71af48df-0ubuntu5 
running 'init-local' at Sun, 24 May 2020 08:15:54 +. Up 24.20 seconds.
[  OK  ] Finished Initial cloud-init job (pre-networking).
[  OK  ] Reached target Network (Pre).
cloud-init-local.service
 Starting Network Service...
[  OK  ] Started Network Service.
systemd-networkd.service
 Starting Wait for Network to be Configured...
 Starting Network Name Resolution...
[  OK  ] Finished Wait for Network to be Configured.
systemd-networkd-wait-online.service
 Starting Initial cloud-init…b (metadata service crawler)...
[  OK  ] 


[Conselhobrasil] profmarcilio extended their membership

2020-04-12 Thread Ubuntu Brasil - RJ
Hello Conselho Ubuntu Brasil,

Marcilio Bergami de Carvalho (profmarcilio) renewed their own membership
in the Ubuntu Brasil - RJ (ubuntu-br-rj) team until 2021-04-16.
<https://launchpad.net/~ubuntu-br-rj>

Regards,
The Launchpad team

-- 
You received this email because your team Conselho Ubuntu Brasil is an admin of 
the Ubuntu Brasil - RJ team.

___
Mailing list: https://launchpad.net/~conselhobrasil
Post to : conselhobrasil@lists.launchpad.net
Unsubscribe : https://launchpad.net/~conselhobrasil
More help   : https://help.launchpad.net/ListHelp


[Conselhobrasil] flavio-bordoni extended their membership

2020-04-12 Thread Ubuntu Brasil - RJ
Hello Conselho Ubuntu Brasil,

Flavio Bordoni (flavio-bordoni) renewed their own membership in the
Ubuntu Brasil - RJ (ubuntu-br-rj) team until 2021-04-17.
<https://launchpad.net/~ubuntu-br-rj>

Regards,
The Launchpad team



[Conselhobrasil] mdantas extended their membership

2020-04-12 Thread Ubuntu Brasil - RJ
Hello Conselho Ubuntu Brasil,

Alexander Pindarov (mdantas) renewed their own membership in the Ubuntu
Brasil - RJ (ubuntu-br-rj) team until 2021-04-16.
<https://launchpad.net/~ubuntu-br-rj>

Regards,
The Launchpad team



[Conselhobrasil] fcostapb extended their membership

2020-04-09 Thread Ubuntu Brasil - RJ
Hello Conselho Ubuntu Brasil,

Francisco Costa (fcostapb) renewed their own membership in the Ubuntu
Brasil - RJ (ubuntu-br-rj) team until 2021-04-16.
<https://launchpad.net/~ubuntu-br-rj>

Regards,
The Launchpad team


