[Mpi-forum] MPI Forum Meeting Registration

2020-09-26 Thread Wesley Bland via mpi-forum
Hi all,

In all the excitement with the EuroMPI Conference last week, I forgot to send 
out a reminder to register for the upcoming MPI Forum meeting. The page has 
been posted for a while, but I don’t think I ever sent out an email.

For everyone who will be attending, please register ASAP. The deadline is 11:30 
AM US Central on Monday as that is the announced time for the first voting 
block.

https://forms.gle/wePXwqJZfuhH7DyX7

The votes are already posted on the agenda page, and I’ll be updating the 
voting page in the next few minutes. One vote was announced but missing from 
the agenda page; as a reminder, there will be a “no-no” vote on #137: 
“The Embiggenment”.

Thanks,
Wes
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


Re: [PATCH v3] usb: dwc3: Stop active transfers before halting the controller

2020-09-25 Thread Wesley Cheng



On 9/24/2020 11:06 PM, Felipe Balbi wrote:
> 
> Hi,
> 
> Alan Stern  writes:
>>>> Hence, the reason if there was already a pending IRQ triggered, the
>>>> dwc3_gadget_disable_irq() won't ensure the IRQ is handled.  We can do
>>>> something like:
>>>> if (!is_on)
>>>>dwc3_gadget_disable_irq()
>>>> synchronize_irq()
>>>> spin_lock_irqsave()
>>>> if(!is_on) {
>>>> ...
>>>>
>>>> But the logic to only apply this on the pullup removal case is a little
>>>> messy.  Also, from my understanding, the spin_lock_irqsave() will only
>>>> disable the local CPU IRQs, but not the interrupt line on the GIC, which
>>>> means other CPUs can handle it, unless we explicitly set the IRQ
>>>> affinity to CPUX.
>>>
>>> Yeah, the way I understand this can't really happen. But I'm open to
>>> being educated. Maybe Alan can explain if this is really a possibility?
>>

Hi Felipe/Alan,

Thanks for the detailed explanations and inputs.  Useful information to
have!

>> It depends on the details of the hardware, but yes, it is possible in
>> general for an interrupt handler to run after you have turned off the
>> device's interrupt-request line.  For example:
>>
>>  CPU A   CPU B
>>  --- --
>>  Gets an IRQ from the device
>>  Calls handler routine   spin_lock_irq
>>spin_lock_irq Turns off the IRQ line
>>...spins...   spin_unlock_irq
>>Rest of handler runs
>>spin_unlock_irq
>>
>> That's why we have synchronize_irq().  The usual pattern is something
>> like this:
>>
>>  spin_lock_irq(&priv->lock);
>>  priv->disconnected = true;
>>  my_disable_irq(priv);
>>  spin_unlock_irq(&priv->lock);
>>  synchronize_irq(priv->irq);
>>
>> And of course this has to be done in a context that can sleep.
>>
>> Does this answer your question?
> 
> It does, thank you Alan. It seems like we don't need a call to
> disable_irq(), only synchronize_irq() is enough, however it should be
> called with spinlocks released, not held.
> 

I mean...I'm not against using the synchronize_irq() +
dwc3_gadget_disable_irq() route, since that will address the concern as
well.  It was just that with the disable/enable IRQ route, I didn't need to
explicitly check the is_on flag again, since I didn't need to worry
about overwriting the DEVTEN reg (for the pullup enable case).  Will
include this in the next version.
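A minimal sketch of that ordering (illustrative only, not the actual next
patch revision; dwc3_gadget_disable_irq(), dwc->lock, and dwc->irq_gadget are
the names used in the surrounding discussion):

```c
/* Sketch: mask device events under the lock, then wait out any
 * in-flight handler before touching shared state.  synchronize_irq()
 * must be called with dwc->lock released, since the interrupt
 * handler takes that same lock (see Alan's CPU A / CPU B example).
 */
if (!is_on) {
	spin_lock_irqsave(&dwc->lock, flags);
	dwc3_gadget_disable_irq(dwc);		/* clear DEVTEN */
	spin_unlock_irqrestore(&dwc->lock, flags);
	synchronize_irq(dwc->irq_gadget);	/* wait for running handlers */
}
```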

Thanks
Wesley Cheng

> Thanks
> 

-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project


Re: Want to get information about the best Django hosting platform

2020-09-21 Thread Wesley Montcho
OK

THANKS

On Mon, Sep 21, 2020 at 12:50, Andréas Kühne  wrote:

> I don't have any experience of them - so I really don't know. I have used
> heroku (or I am using it currently).
>
> Regards,
>
> Andréas
>
>
> On Mon, Sep 21, 2020 at 13:01, Wesley Montcho <
> wesleymont...@gmail.com> wrote:
>
>> Thanks you.
>>
>>
>>
>> What do you think about Scalingo ?
>>
>> On Mon, Sep 21, 2020 at 11:29, Andréas Kühne 
>> wrote:
>>
>>> This isn't an easy question to answer.
>>>
>>> Unfortunately there is no real "goto" shop for django. There are however
>>> several ways to accomplish this. If you want a simple startup solution -
>>> try Heroku. It'll be sufficient for small to medium sized handling and you
>>> don't need to do any setup yourself.
>>> Another "easy" alternative would be to look into google appengine -
>>> however I am not sure if it is possible to setup for python 3 ... I know
>>> this has been a problem previously.
>>> Other than those 2 I think you would need to setup your own virtual
>>> server and handle it that way - which may be the best solution as long as
>>> you know a little about it - and do it following a good guide.
>>>
>>> Regards,
>>>
>>> Andréas
>>>
>>>
>>> On Sun, Sep 20, 2020 at 17:36, wesley...@gmail.com <
>>> wesleymont...@gmail.com> wrote:
>>>
>>>> Hi guys
>>>>
>>>> Please I want some person to help me about choosing hosting plateform
>>>> for my Django project, a plateforme that offers an extensible offer without
>>>> problem for migration through version of thé ptoject.
>>>>
>>>> Hope you will help me 
>>>>

-- 
You received this message because you are subscribed to the Google Groups 
"Django users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to django-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/django-users/CAGRNXvS4A3X8XQYRymO2MVzTzzBz2gEhr01cttYna9c9f1%3DVEA%40mail.gmail.com.


Re: Want to get information about the best Django hosting platform

2020-09-21 Thread Wesley Montcho
Thank you.



What do you think about Scalingo ?

On Mon, Sep 21, 2020 at 11:29, Andréas Kühne  wrote:

> This isn't an easy question to answer.
>
> Unfortunately there is no real "go-to" shop for Django. There are, however,
> several ways to accomplish this. If you want a simple startup solution,
> try Heroku. It'll be sufficient for small to medium sized workloads and you
> don't need to do any setup yourself.
> Another "easy" alternative would be to look into Google App Engine;
> however, I am not sure if it is possible to set it up for Python 3 ... I know
> this has been a problem previously.
> Other than those two I think you would need to set up your own virtual server
> and handle it that way, which may be the best solution as long as you know
> a little about it, and do it following a good guide.
>
> Regards,
>
> Andréas
>
>
> On Sun, Sep 20, 2020 at 17:36, wesley...@gmail.com <
> wesleymont...@gmail.com> wrote:
>
>> Hi guys
>>
>> Please I want some person to help me about choosing hosting plateform for
>> my Django project, a plateforme that offers an extensible offer without
>> problem for migration through version of thé ptoject.
>>
>> Hope you will help me 
>>

-- 
You received this message because you are subscribed to the Google Groups 
"Django users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to django-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/django-users/CAGRNXvT2ubUTruYO_672P%2BYuXhETK8ru27qpK_voCWDgP57fuw%40mail.gmail.com.


Want to get information about the best Django hosting platform

2020-09-20 Thread wesley...@gmail.com
Hi guys 

Please, I would like some help choosing a hosting platform for 
my Django project: a platform that offers a scalable plan and painless 
migration between versions of the project. 

Hope you will help me 

-- 
You received this message because you are subscribed to the Google Groups 
"Django users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to django-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/django-users/8c6c458d-00b8-4ed3-b49c-2a119d11c86en%40googlegroups.com.


[Mpi-forum] EuroMPI/USA 2020 Taking Place Next Week

2020-09-18 Thread Wesley Bland via mpi-forum
Hi all,

As Martin and I have mentioned many times in the various meetings recently, we 
won’t have a virtual meeting next week because we will be attending the 
EuroMPI/USA 2020 conference. I wanted to make sure you are all aware that 
you are also welcome to attend, and to provide the information to do so.

The conference itself is free for attendees and the schedule can be found here:

https://eurompi.github.io/program.html 

We have limited the times to 10:00am Eastern US to 2:00pm Eastern US to be a 
little more friendly to both the US West Coast and the European time zones.

To join, you will need to register for each day independently on the same page 
linked above.

Feel free to join any or all of the sessions. If there’s any more information I 
can provide, please let me know.

Thanks,
Wes
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


Re: [google-appengine] Can we create a billing account without turning on the free $300/90 days credit?

2020-09-17 Thread wesley chun
Also, track this SO question <http://stackoverflow.com/questions/63915713>
as some Googlers are active on it. And if you feel compelled to contact
Cloud billing/support folks sooner rather than later, here are some links:

   - https://console.cloud.google.com/support/chat
   - https://cloud.google.com/support/billing


On Wed, Sep 16, 2020 at 7:35 AM 'David (Cloud Platform Support)' via Google
App Engine  wrote:

> As per the free trial documentation
> <https://cloud.google.com/free/docs/gcp-free-tier>, the free trial will
> start automatically when you set up your billing account so you must be
> prepared to use your free trial when doing this.
> On Tuesday, September 15, 2020 at 7:21:59 PM UTC-4 NP wrote:
>
>> >>>> then create a billing account there
>> <https://console.cloud.google.com/billing/create> and see if you can do
>> so w/o activting the Free Trial <<<
>>
>> Right now, it looks to me like it will activate it. I don't want to
>> complete the step in case it does activate the free trial (since there's no
>> way to turn it off). Is there any way to confirm if it will or will not
>> activate the free trial?
>>
>>


-- 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
"A computer never does what you want... only what you tell it."
wesley chun :: @wescpy <http://twitter.com/wescpy> :: Software
Architect & Engineer
Developer Advocate at Google Cloud by day; at night...
Python training & consulting : http://CyberwebConsulting.com
"Core Python" books : http://CorePython.com
Python blog: http://wescpy.blogspot.com

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to google-appengine+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/google-appengine/CAB6eaA4VVMO0gNUx1kNFEZa1MaphMYZKPFRSBrNjKqsm%3DyxTiQ%40mail.gmail.com.


Re: [google-appengine] Re: Not able to run dev_appserver with Python 3 only environment

2020-09-17 Thread wesley chun
Hi Ritesh, let me offer some alternatives in addition to Alexis'
recommendations:

   1. I would avoid using Homebrew for installing the Cloud SDK... we have
   a note at the bottom of the first section of the App Engine install docs
   
<https://cloud.google.com/appengine/docs/standard/python3/setting-up-environment>
   that says, "Avoid using a package manager such as apt or yum to install the
   Cloud SDK." Homebrew is a package manager.
   2. Here are the Cloud SDK recommended installation instructions
   <https://cloud.google.com/sdk/docs/install>
   3. When you called  `dev_appserver.py` above, it looks like it was
   looking for Python 2 which it couldn't find. I'm wondering whether this is
   because: 1) you have an older version of dev_appserver.py that was based on
   python2, or 2) you didn't activate your virtualenv and somehow you were
   calling Apple's Python 2 binary. Anyway, if you reinstall the SDK, you
   shouldn't see this issue again.
   4. The Cloud SDK should be installed in your home directory where
   dev_appserver.py should be available in its bin directory, i.e.,
   ~/google-cloud-sdk/bin/dev_appserver.py
   5. OTOH (on the other hand), since you're no longer using `webapp2` and
   likely using Flask, you should be able to start the development server that
   comes with Flask: `python main.py` should work, esp. if you're using the
   QuickStart sample, making it so you don't have to use dev_appserver.py any
   more.
   6. If you used Datastore with the App Engine NDB library (Py2) back in
   the day, then to run on Py3 you would do a minor migration to the
   *Cloud* NDB library
   <https://cloud.google.com/appengine/docs/standard/python3/migrating-to-cloud-ndb>
   to keep the same interface to Datastore you're familiar with.
   7. If instead, you want to use the newer interface to Datastore, you'd
   use the Cloud Datastore client library
   <https://cloud.google.com/datastore> instead of Cloud NDB.
   8. To make things slightly *more* complex, you should know the next
   generation of Datastore has been released, and it had a slight product
   rebrand as Cloud Firestore <http://cloud.google.com/firestore> to
   indicate it has inherited some key features of the Firebase realtime
   database <https://firebase.google.com/docs/database/rtdb-vs-firestore>.
   However, storing & querying data w/Firestore is quite different
   <https://cloud.google.com/firestore/docs/quickstart-servers> from
   Datastore, so for the time being, and for backwards compatibility,
   Datastore users can choose to run Cloud Firestore in Datastore mode (and
   use the Datastore client library).
   9. If instead you'd prefer to start from scratch with the latest
   hotness, then you can use Cloud Firestore in native mode. Here are
comparisons
   b/w Firestore in native vs Datastore modes
   <https://cloud.google.com/datastore/docs/firestore-or-datastore> to help
   you decide.
   10. The Datastore emulator is now a Java app... read more about it here
   <https://cloud.google.com/datastore/docs/tools/datastore-emulator>

Anyway, hope some of this helps!
--Wesley


On Thu, Aug 13, 2020 at 12:34 PM 'Alexis (Google Cloud Platform Support)'
via Google App Engine  wrote:

> Hello Ritesh,
>
> I looked in our internal issues and I did not find this error posted. It
> could be a new one. However, a quick Google search shows that someone
> here[1] downgraded to an older version of the SDK and it worked. Older
> versions of the SDK can be found here[2].
>
> If it does work with an older version, it could mean that there is an
> issue. And we would need to report that in the issue tracker here[3], under
> "Cloud SDK issues". However, there are many other possibilities (too many
> not visible) and it would be good to isolate further without brew, etc..
> Just for testing under which environment/config this happens.
>
> If you could, please let us know if it works with an older version of
> Cloud SDK and under which specific environment. Otherwise, this may need to
> be a support ticket. Thank you in advance.
>
> [1]
> https://www.reddit.com/r/googlecloud/comments/gdndx4/gcloud_sdk_update_from_28900_to_29000_on_mac/
> [2] https://cloud.google.com/sdk/docs/downloads-versioned-archives#archive
> [3] https://cloud.google.com/support/docs/issue-trackers
>
> On Wednesday, August 12, 2020 at 6:37:14 AM UTC-4, Ritesh Nadhani wrote:
>>
>> Hello
>>
>> I am getting back to GAE after many years. Things have definitely
>> changed. As a new project, I got started with Python 37 runtime. My
>> virtualenv is created using: python3 -m venv ENV and thus does not have
>> python2.
>>
>> When I try to run my app in local mode using dev_appserver.py (i am just
>> trying to play around with the sample app from the documen

Re: [prometheus-users] Prometheus disk I/O metrics

2020-09-15 Thread Wesley Peng

Brian,

Do you know if we can implement a Lua exporter within nginx that takes the 
application's APM data and reports it to Prometheus?


Thank you.


Brian Candler wrote:
Just to add, the data collected by node_exporter maps closely to the raw 
stats exposed by the kernel, so the kernel documentation is helpful:


--
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to prometheus-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/2e0ca03a-4747-f214-d693-2e54996f01c8%40pobox.com.


Re: [prometheus-users] Prometheus disk I/O metrics

2020-09-14 Thread Wesley Peng

You can calculate it from the basic I/O metrics Prometheus provides.
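For example, assuming the standard node_exporter diskstats metric names, 
something along these lines approximates await (average time per completed 
I/O) per device:

```promql
# Hedged sketch: average seconds per completed I/O over 5m, per device.
(
    rate(node_disk_read_time_seconds_total[5m])
  + rate(node_disk_write_time_seconds_total[5m])
)
/
(
    rate(node_disk_reads_completed_total[5m])
  + rate(node_disk_writes_completed_total[5m])
)
```

Multiply by 1000 if you want milliseconds, to match Datadog's 
system.io.await units.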

Regards.


rsch...@gmail.com wrote:
In datadog I used metrics "system.io.await" to create alert on my 
linux instances. What is the equivalent metrics in prometheus? 


--
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to prometheus-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/500d9891-b3cb-9a0e-1424-fc45cd614978%40pobox.com.


Re: [google-appengine] Can we create a billing account without turning on the free $300/90 days credit?

2020-09-14 Thread wesley chun
Hello. You're correct that appcfg.py
<https://cloud.google.com/appengine/docs/standard/python/tools/appcfg-arguments>
is going away
<https://cloud.google.com/appengine/docs/standard/java/tools/migrating-from-appcfg-to-gcloud>.
Try going to the cloud console <https://console.cloud.google.com>, select
your App Engine project from the top, then create a billing account there
<https://console.cloud.google.com/billing/create> and see if you can do so
w/o activating the Free Trial (which I agree with you on -- no one should
activate it until they have enough regular use to take advantage of it,
esp. since it expires in 3 mos). Once you've created a billing account
using a credit card, you should be able to use gcloud deploy successfully
to deploy.

Can you also privately send me some terminal screenshots of gcloud deploy
asking for a billing account, and whether creating a new one automatically
activates the Free Trial?

Thank you,
--Wesley

On Mon, Sep 14, 2020 at 11:36 AM NP  wrote:

> Hello,
>
> GAE no longer allows deployment to production using app cfg. They insist
> you must use "gcloud deploy".
>
> However, when you try to use "gcloud deploy", it insists you must enable a
> billing account and enabling this automatically turns on the 'free' $300
> credit which expires in 90 days or so.
>
>
> Given that I'm still in early stages of development (only deploying to
> production to make sure everything works), I do not wish to waste the 90
> days/free $300 credit.
>
> So my questions are
>
> 1) Is it possible to create a new billing account without turning on the
> free $300/90 days credit?
>
> 2) Is there still a way to use appcfg.py to deploy your app (which means I
> don't need the answer to question 1)?
>



-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to google-appengine+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/google-appengine/CAB6eaA5-6%2By-URLrTAsbo4qdyCZoAY_0%2BZVGWyjPEmKnEH%3DP1Q%40mail.gmail.com.


Re: [prometheus-users] Prometheus.service status failed

2020-09-14 Thread Wesley Peng

Are you running a 64-bit program on a 32-bit OS, or the reverse?

regards.

Suryaprakash Kancharlapalli wrote:
I have set up Prometheus from the binary, but when I start the service it 
fails with a "Prometheus startup service failed" status. Please find the 
attached snapshot for reference. Can someone suggest what the reason could 
be and how I can resolve it?


--
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to prometheus-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/57ce9c9b-e7c3-5089-d3f2-2004c047c839%40pobox.com.


Re: [prometheus-users] Alert Manager source URL to point to TLS URL of Prometheus

2020-09-13 Thread Wesley Peng
You can set up Nginx to proxy both Alertmanager and Prometheus on 
different HTTP ports.
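A minimal sketch of that setup (ports, hostnames, and certificate paths here 
are illustrative, not taken from the original question):

```nginx
# Hedged sketch: one nginx fronting Prometheus and Alertmanager over TLS.
server {
    listen 9443 ssl;
    server_name prom.example.com;                 # illustrative name
    ssl_certificate     /etc/nginx/certs/monitoring.crt;
    ssl_certificate_key /etc/nginx/certs/monitoring.key;

    location / {
        proxy_pass http://127.0.0.1:9090;         # Prometheus default port
    }
}

server {
    listen 9444 ssl;
    server_name alerts.example.com;               # illustrative name
    ssl_certificate     /etc/nginx/certs/monitoring.crt;
    ssl_certificate_key /etc/nginx/certs/monitoring.key;

    location / {
        proxy_pass http://127.0.0.1:9093;         # Alertmanager default port
    }
}
```

Additionally, Prometheus's --web.external-url flag controls the URLs it 
generates, so pointing it at the proxied HTTPS address should make the 
alert source links use the TLS URL.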


regards.


sunils...@gmail.com wrote:
Now the challenge is that Alertmanager is directly associated with 
Prometheus, and when I access Alertmanager, all the source links are 
pointing to Prometheus directly.

Can we configure the Alertmanager source URL to be the TLS URL of Prometheus?


--
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to prometheus-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/427ffa62-06f1-9e59-d613-310ddcf1a32b%40pobox.com.


Re: cache a object in modperl

2020-09-13 Thread Wesley Peng




Mithun Bhattacharya wrote:
Does IANA have an easy way of determining whether there is an update 
since a certain date ? I was thinking it might make sense to just run a 
scheduled job to monitor for update and then restart your service or 
refresh your local cache depending upon how you solve it.


Yes, I agree with this.
I may monitor IANA's database via their version changes, and run a 
cron job to restart my Apache server during non-active user time 
(e.g., 3:00 AM).


Or do you have a better solution?
Thanks.


Re: cache a object in modperl

2020-09-13 Thread Wesley Peng

Hello

Mithun Bhattacharya wrote:
How frequently do you wish to refresh the cache? If you do it at startup, 
then your cache refresh is tied to the service restart, which might not 
be ideal or feasible.


In recent days I saw that IANA updated their database on:

2020.09.09
2020.09.13

So I assume they update the DB file every few days.

Regards.


Re: cache a object in modperl

2020-09-13 Thread Wesley Peng

That's great. Thank you Adam.

Adam Prime wrote:
If the database doesn't change very often, and you don't mind only 
getting updates to your database when you restart apache, and you're 
using prefork mod_perl, then you could use a startup.pl to load your 
database before apache forks, and get a shared copy globally in all your 
apache children.


https://perl.apache.org/docs/1.0/guide/config.html#The_Startup_File

This thread from 13 years ago seems to have a clear-ish example of how 
to use startup.pl to do what I'm talking about.


If you need it to update more frequently than when you restart apache, 
you could potentially use a PerlChildInitHandler to load the data when 
apache creates children.  This will use more memory, as each child will 
have its own copy, and can also result in a situation where children can 
have different versions of the database loaded and be serving requests 
at the same time.  If you want to go this way you might want to also add 
a MaxRequestsPerChild directive to your apache config to make sure that 
your children die and get refreshed on the regular, if you don't 
already have one.


Adam
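A minimal startup.pl sketch of the pre-fork approach Adam describes 
(Net::IANA::TLD is the module from this thread; the package and variable 
names below are illustrative assumptions):

```perl
# startup.pl -- hedged sketch of pre-fork caching under prefork mod_perl.
# Loaded via "PerlRequire /path/to/startup.pl" before Apache forks, so
# every child inherits the object copy-on-write.
package My::TLDCache;

use strict;
use warnings;
use Net::IANA::TLD;    # downloads the IANA DB once, in new()

# Built exactly once, at server startup, before the fork.
our $tld = Net::IANA::TLD->new();

# Handlers can then reuse the cached object without re-downloading:
#   my $tld = My::TLDCache::instance();
sub instance { return $tld }

1;
```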


On 9/13/2020 10:51 PM, Wesley Peng wrote:

Hello

I am not so familiar with modperl.

For work requirement, I need to access IANA TLD database.

So I wrote this perl module:
https://metacpan.org/pod/Net::IANA::TLD

But, for each new() in the module, the database file will be 
downloaded from IANA's website.


I know this is pretty Inefficient.

My question is, can I cache the new'ed object by modperl?

If so, how to do?

Thanks.


cache a object in modperl

2020-09-13 Thread Wesley Peng

Hello

I am not so familiar with modperl.

For work requirement, I need to access IANA TLD database.

So I wrote this perl module:
https://metacpan.org/pod/Net::IANA::TLD

But, for each new() in the module, the database file will be downloaded 
from IANA's website.


I know this is pretty inefficient.

My question is: can I cache the new()'ed object with mod_perl?

If so, how?

Thanks.


[Mpi-forum] FTWG Vote Announcements

2020-09-09 Thread Wesley Bland via mpi-forum
Hi all,

Here’s the announcements from the FTWG for the upcoming meeting:

For MPI 4.0 (previously 4.1, but now fits the new timeline):

Second Vote - Clarification: Section 8.5 - p. 377 - Error Class and Code Advice 
to Implementors
Issue - #259 
Pull Request - #178 

For MPI 4.1:

First Vote - Add MPI_Remove_error_class, MPI_Remove_error_code, 
MPI_Remove_error_string
Issue - #283 
Pull Request - #166 

Thanks,
Wes
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


Re: [PATCH v3] usb: dwc3: Stop active transfers before halting the controller

2020-09-08 Thread Wesley Cheng



On 9/6/2020 11:20 PM, Felipe Balbi wrote:
> 
> Hi,
> 
> Wesley Cheng  writes:
>> diff --git a/drivers/usb/dwc3/ep0.c b/drivers/usb/dwc3/ep0.c
>> index 59f2e8c31bd1..456aa87e8778 100644
>> --- a/drivers/usb/dwc3/ep0.c
>> +++ b/drivers/usb/dwc3/ep0.c
>> @@ -197,7 +197,7 @@ int dwc3_gadget_ep0_queue(struct usb_ep *ep, struct 
>> usb_request *request,
>>  int ret;
>>  
>>  spin_lock_irqsave(&dwc->lock, flags);
>> -if (!dep->endpoint.desc) {
>> +if (!dep->endpoint.desc || !dwc->pullups_connected) {
> 
> this looks odd. If we don't have pullups connected, we shouldn't have a
> descriptor; likewise, if we don't have a descriptor, we haven't been
> enumerated, therefore we shouldn't have pullups connected.
> 
> What am I missing here?
> 

Hi Felipe,

When we run

echo "" > /sys/kernel/config/usb_gadget/g1/UDC

this triggers the usb_gadget_disconnect() routine to execute.

int usb_gadget_disconnect(struct usb_gadget *gadget)
{
...
ret = gadget->ops->pullup(gadget, 0);
if (!ret) {
gadget->connected = 0;
gadget->udc->driver->disconnect(gadget);
}

So it is possible that we've already disabled the pullup before running
the disable() callbacks in the function drivers.  The disable()
callbacks usually are the ones responsible for calling usb_ep_disable(),
where we clear the desc field.  This means there is a brief period where
the pullups_connected = 0, but we still have valid ep desc, as it has
not been disabled yet.

Also, for function drivers like mass storage, the fsg_disable() routine
defers the actual usb_ep_disable() call to the fsg_thread, so it's not
always ensured that the disconnect() execution would result in the
usb_ep_disable() occurring synchronously.

>> @@ -1926,6 +1926,21 @@ static int dwc3_gadget_set_selfpowered(struct 
>> usb_gadget *g,
>>  return 0;
>>  }
>>  
>> +static void dwc3_stop_active_transfers(struct dwc3 *dwc)
>> +{
>> +u32 epnum;
>> +
>> +for (epnum = 2; epnum < DWC3_ENDPOINTS_NUM; epnum++) {
> 
> dwc3 knows the number of endpoints available in the HW. Use dwc->num_eps
> instead.
> 

Sure, will do.

>> @@ -1971,6 +1986,8 @@ static int dwc3_gadget_run_stop(struct dwc3 *dwc, int 
>> is_on, int suspend)
>>  return 0;
>>  }
>>  
>> +static void __dwc3_gadget_stop(struct dwc3 *dwc);
>> +
>>  static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
>>  {
>>  struct dwc3 *dwc = gadget_to_dwc(g);
>> @@ -1994,9 +2011,37 @@ static int dwc3_gadget_pullup(struct usb_gadget *g, 
>> int is_on)
>>  }
>>  }
>>  
>> +/*
>> + * Synchronize and disable any further event handling while controller
>> + * is being enabled/disabled.
>> + */
>> +disable_irq(dwc->irq_gadget);
> 
> why isn't dwc3_gadget_disable_irq() enough?
> 
>>  spin_lock_irqsave(&dwc->lock, flags);
> 
> spin_lock_irqsave() will disable interrupts, why disable_irq() above?
> 

In the discussion I had with Thinh, the concern was that with the newly
added code to override the lpos here, if the interrupt routine
(dwc3_check_event_buf()) runs, then it will reference the lpos for
copying the event buffer contents to the event cache, and potentially
process events.  There is no locking in place, so it could be possible
to have both run in parallel.

Hence, if there was already a pending IRQ triggered, the
dwc3_gadget_disable_irq() alone won't ensure that IRQ is handled.  We
could do something like:
if (!is_on)
	dwc3_gadget_disable_irq()
synchronize_irq()
spin_lock_irqsave()
if (!is_on) {
...

But the logic to apply this only in the pullup removal case is a little
messy.  Also, from my understanding, spin_lock_irqsave() will only
disable the local CPU's IRQs, not the interrupt line on the GIC, which
means other CPUs can still handle it, unless we explicitly set the IRQ
affinity to CPUX.

>> +/* Controller is not halted until pending events are acknowledged */
>> +if (!is_on) {
>> +u32 count;
>> +
>> +/*
>> + * The databook explicitly mentions for a device-initiated
>> + * disconnect sequence, the SW needs to ensure that it ends any
>> + * active transfers.
>> + */
> 
> make this a little better by mentioning the version and section of the
> databook you're reading. That makes it easier for future
> reference. Also, use an actual quote from the databook, along the lines
> of:
> 
>   /*
>  * Synopsys DesignWare Cores USB3 Da

Re: [prometheus-users] Can prometheurs scrape :/deamon/metrics ?

2020-09-08 Thread Wesley Peng

I am sure it can, because I just finished setting up monitoring in exactly this form. :)


Rodolphe Ghio wrote:

I was wondering if Prometheus can scrape a target like this:
:/deamon/metrics?  I've tried many times, and Prometheus
doesn't want to start anymore.
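
For reference, a non-default metrics path is set per scrape job with
`metrics_path`; a minimal prometheus.yml sketch (job name, host, and port
below are placeholders, not values from the question):

```yaml
scrape_configs:
  - job_name: 'daemon'                # placeholder job name
    metrics_path: /deamon/metrics     # non-default path, as in the question
    static_configs:
      - targets: ['myhost:9100']      # placeholder host:port
```

If Prometheus refuses to start, `promtool check config prometheus.yml`
will usually point at the offending line.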


--
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to prometheus-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/65e5ab8b-793e-3e56-0ca5-085adf5b8d74%40gmx.ie.


Re: [prometheus-users] Using alert manager with external receiver

2020-09-08 Thread Wesley Peng

Hi

Nina Sc wrote:

Is there a way to use the alert manager to send an alert to an external API?
I mean, instead of using an Exchange server or Slack etc., I will provide an
endpoint like `https://mypullendpoint.host.com` and the alert manager will
send the alert to this URL?


Yes, you need to define receivers with a webhook configuration.

such as:

receivers:
- name: "alerta"
  webhook_configs:
  - url: 'http://localhost:8080/webhooks/prometheus'
    send_resolved: true


Please refer to these:
https://alerta.io/
https://github.com/alerta/prometheus-config
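
For illustration, the JSON body Alertmanager POSTs to such a webhook can be
parsed like this. This is a sketch, not code from the alerta project: the
helper name is made up, and the sample payload follows Alertmanager's v4
webhook format.

```python
import json

def summarize_alerts(payload: str):
    """Parse an Alertmanager webhook payload; return (status, alert names)."""
    body = json.loads(payload)
    names = [a["labels"].get("alertname", "?") for a in body.get("alerts", [])]
    return body.get("status"), names

# Minimal sample payload in the Alertmanager v4 webhook format.
sample = json.dumps({
    "version": "4",
    "status": "firing",
    "alerts": [
        {"labels": {"alertname": "HighLoad", "severity": "warning"},
         "annotations": {"summary": "load above threshold"}},
    ],
})

status, names = summarize_alerts(sample)
print(status, names)  # firing ['HighLoad']
```

Your endpoint at `http://localhost:8080/webhooks/prometheus` would receive
this body on every notification, with `send_resolved: true` also delivering
payloads whose status is "resolved".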


Regards.

--
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to prometheus-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/375f41af-8bfb-e0c6-c4ec-38a0c6ca0847%40gmx.ie.


[PATCH] iommu/amd: Add prefetch iommu pages command build function

2020-09-05 Thread Wesley Sheng
Add function to build prefetch iommu pages command

Signed-off-by: Wesley Sheng 
---
 drivers/iommu/amd/amd_iommu_types.h |  2 ++
 drivers/iommu/amd/iommu.c   | 19 +++
 2 files changed, 21 insertions(+)

diff --git a/drivers/iommu/amd/amd_iommu_types.h 
b/drivers/iommu/amd/amd_iommu_types.h
index baa31cd2411c..73734a0c4679 100644
--- a/drivers/iommu/amd/amd_iommu_types.h
+++ b/drivers/iommu/amd/amd_iommu_types.h
@@ -173,6 +173,7 @@
 #define CMD_INV_IOMMU_PAGES		0x03
 #define CMD_INV_IOTLB_PAGES		0x04
 #define CMD_INV_IRT			0x05
+#define CMD_PF_IOMMU_PAGES		0x06
 #define CMD_COMPLETE_PPR		0x07
 #define CMD_INV_ALL			0x08
 
@@ -181,6 +182,7 @@
 #define CMD_INV_IOMMU_PAGES_SIZE_MASK	0x01
 #define CMD_INV_IOMMU_PAGES_PDE_MASK	0x02
 #define CMD_INV_IOMMU_PAGES_GN_MASK	0x04
+#define CMD_PF_IOMMU_PAGES_INV_MASK	0x10
 
 #define PPR_STATUS_MASK			0xf
 #define PPR_STATUS_SHIFT		12
diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index ba9f3dbc5b94..b3971595b0e9 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -976,6 +976,25 @@ static void build_inv_irt(struct iommu_cmd *cmd, u16 devid)
CMD_SET_TYPE(cmd, CMD_INV_IRT);
 }
 
+static void build_pf_iommu_pages(struct iommu_cmd *cmd, u64 address,
+   u16 devid, int pfcnt, bool size,
+   bool inv)
+{
+   memset(cmd, 0, sizeof(*cmd));
+
+   address &= PAGE_MASK;
+
+   cmd->data[0]  = devid;
+   cmd->data[0] |= (pfcnt & 0xff) << 24;
+   cmd->data[2]  = lower_32_bits(address);
+   cmd->data[3]  = upper_32_bits(address);
+   if (size)
+   cmd->data[2] |= CMD_INV_IOMMU_PAGES_SIZE_MASK;
+   if (inv)
+   cmd->data[2] |= CMD_PF_IOMMU_PAGES_INV_MASK;
+   CMD_SET_TYPE(cmd, CMD_PF_IOMMU_PAGES);
+}
+
 /*
  * Writes the command to the IOMMUs command buffer and informs the
  * hardware about the new command.
-- 
2.16.2
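
To illustrate the word layout build_pf_iommu_pages() produces, here is a
Python sketch. It assumes 4 KiB pages, and the placement of the command type
in bits 28-31 of data[1] follows the existing CMD_SET_TYPE() macro in
amd_iommu_types.h; mask values are copied from the patch.

```python
CMD_PF_IOMMU_PAGES = 0x06
CMD_INV_IOMMU_PAGES_SIZE_MASK = 0x01
CMD_PF_IOMMU_PAGES_INV_MASK = 0x10

def build_pf_iommu_pages(address, devid, pfcnt, size=False, inv=False):
    """Mirror the patch's field packing into the four 32-bit command words."""
    data = [0, 0, 0, 0]
    address &= ~0xfff                        # PAGE_MASK, assuming 4 KiB pages
    data[0] = devid | ((pfcnt & 0xff) << 24)
    data[2] = address & 0xffffffff           # lower_32_bits()
    data[3] = (address >> 32) & 0xffffffff   # upper_32_bits()
    if size:
        data[2] |= CMD_INV_IOMMU_PAGES_SIZE_MASK
    if inv:
        data[2] |= CMD_PF_IOMMU_PAGES_INV_MASK
    data[1] |= CMD_PF_IOMMU_PAGES << 28      # CMD_SET_TYPE()
    return data

cmd = build_pf_iommu_pages(0x123456000, devid=0xbeef, pfcnt=4, inv=True)
print([hex(w) for w in cmd])  # ['0x400beef', '0x60000000', '0x23456010', '0x1']
```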

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


[PATCH 2/2] iommu/amd: Revise ga_tag member in struct of fields_remap

2020-09-05 Thread Wesley Sheng
Per <>
The ga_tag member is only available when IRTE[GuestMode]=1; this field
should be reserved when IRTE[GuestMode]=0.  So change ga_tag to rsvd_1
in the fields_remap struct.

Signed-off-by: Wesley Sheng 
---
 drivers/iommu/amd/amd_iommu_types.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/iommu/amd/amd_iommu_types.h 
b/drivers/iommu/amd/amd_iommu_types.h
index e5b05a97eb46..baa31cd2411c 100644
--- a/drivers/iommu/amd/amd_iommu_types.h
+++ b/drivers/iommu/amd/amd_iommu_types.h
@@ -832,7 +832,7 @@ union irte_ga_lo {
/* -- */
guest_mode  : 1,
destination : 24,
-   ga_tag  : 32;
+   rsvd_1  : 32;
} fields_remap;
 
/* For guest vAPIC */
-- 
2.16.2

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


[PATCH 1/2] iommu/amd: Unify reserved member naming convention in struct

2020-09-05 Thread Wesley Sheng
Unify the reserved member naming convention to rsvd_x in the struct.

Signed-off-by: Wesley Sheng 
---
 drivers/iommu/amd/amd_iommu_types.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/iommu/amd/amd_iommu_types.h 
b/drivers/iommu/amd/amd_iommu_types.h
index 30a5d412255a..e5b05a97eb46 100644
--- a/drivers/iommu/amd/amd_iommu_types.h
+++ b/drivers/iommu/amd/amd_iommu_types.h
@@ -841,7 +841,7 @@ union irte_ga_lo {
no_fault: 1,
/* -- */
ga_log_intr : 1,
-   rsvd1   : 3,
+   rsvd_1  : 3,
is_run  : 1,
/* -- */
guest_mode  : 1,
-- 
2.16.2

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


[PATCH] iommu/amd: Add prefetch iommu pages command build function

2020-09-05 Thread Wesley Sheng
Add function to build prefetch iommu pages command

Signed-off-by: Wesley Sheng 
---
 drivers/iommu/amd/amd_iommu_types.h |  2 ++
 drivers/iommu/amd/iommu.c   | 19 +++
 2 files changed, 21 insertions(+)

diff --git a/drivers/iommu/amd/amd_iommu_types.h 
b/drivers/iommu/amd/amd_iommu_types.h
index baa31cd2411c..73734a0c4679 100644
--- a/drivers/iommu/amd/amd_iommu_types.h
+++ b/drivers/iommu/amd/amd_iommu_types.h
@@ -173,6 +173,7 @@
 #define CMD_INV_IOMMU_PAGES0x03
 #define CMD_INV_IOTLB_PAGES0x04
 #define CMD_INV_IRT0x05
+#define CMD_PF_IOMMU_PAGES 0x06
 #define CMD_COMPLETE_PPR   0x07
 #define CMD_INV_ALL0x08
 
@@ -181,6 +182,7 @@
 #define CMD_INV_IOMMU_PAGES_SIZE_MASK  0x01
 #define CMD_INV_IOMMU_PAGES_PDE_MASK   0x02
 #define CMD_INV_IOMMU_PAGES_GN_MASK0x04
+#define CMD_PF_IOMMU_PAGES_INV_MASK0x10
 
 #define PPR_STATUS_MASK0xf
 #define PPR_STATUS_SHIFT   12
diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index ba9f3dbc5b94..b3971595b0e9 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -976,6 +976,25 @@ static void build_inv_irt(struct iommu_cmd *cmd, u16 devid)
CMD_SET_TYPE(cmd, CMD_INV_IRT);
 }
 
+static void build_pf_iommu_pages(struct iommu_cmd *cmd, u64 address,
+   u16 devid, int pfcnt, bool size,
+   bool inv)
+{
+   memset(cmd, 0, sizeof(*cmd));
+
+   address &= PAGE_MASK;
+
+   cmd->data[0]  = devid;
+   cmd->data[0] |= (pfcnt & 0xff) << 24;
+   cmd->data[2]  = lower_32_bits(address);
+   cmd->data[3]  = upper_32_bits(address;
+   if (size)
+   cmd->data[2] |= CMD_INV_IOMMU_PAGES_SIZE_MASK;
+   if (inv)
+   cmd->data[2] |= CMD_PF_IOMMU_PAGES_INV_MASK;
+   CMD_SET_TYPE(cmd, CMD_PF_IOMMU_PAGES);
+}
+
 /*
  * Writes the command to the IOMMUs command buffer and informs the
  * hardware about the new command.
-- 
2.16.2




[PATCH v9 2/4] dt-bindings: usb: Add Qualcomm PMIC type C controller dt-binding

2020-09-04 Thread Wesley Cheng
Introduce the dt-binding for enabling USB type C orientation and role
detection using the PM8150B.  The driver will be responsible for receiving
the interrupt at a state change on the CC lines, reading the
orientation/role, and communicating this information to the remote
clients, which can include a role switch node and a type C switch.

Signed-off-by: Wesley Cheng 
---
 .../bindings/usb/qcom,pmic-typec.yaml | 108 ++
 1 file changed, 108 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/usb/qcom,pmic-typec.yaml

diff --git a/Documentation/devicetree/bindings/usb/qcom,pmic-typec.yaml 
b/Documentation/devicetree/bindings/usb/qcom,pmic-typec.yaml
new file mode 100644
index ..8582ab6a3cc4
--- /dev/null
+++ b/Documentation/devicetree/bindings/usb/qcom,pmic-typec.yaml
@@ -0,0 +1,108 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: "http://devicetree.org/schemas/usb/qcom,pmic-typec.yaml#"
+$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+
+title: Qualcomm PMIC based USB type C Detection Driver
+
+maintainers:
+  - Wesley Cheng 
+
+description: |
+  Qualcomm PMIC Type C Detect
+
+properties:
+  compatible:
+enum:
+  - qcom,pm8150b-usb-typec
+
+  reg:
+maxItems: 1
+description: Type C base address
+
+  interrupts:
+maxItems: 1
+description: CC change interrupt from PMIC
+
+  connector:
+$ref: /connector/usb-connector.yaml#
+description: Connector type for remote endpoints
+type: object
+
+properties:
+  compatible:
+enum:
+  - usb-c-connector
+
+  power-role: true
+  data-role: true
+
+  ports:
+description: Remote endpoint connections
+type: object
+
+properties:
+  port@1:
+description: Remote endpoints for the Super Speed path
+type: object
+
+properties:
+  endpoint@0:
+description: Connection to USB type C mux node
+type: object
+
+  endpoint@1:
+description: Connection to role switch node
+type: object
+
+required:
+  - compatible
+
+required:
+  - compatible
+  - reg
+  - interrupts
+  - connector
+
+additionalProperties: false
+
+examples:
+  - |
+#include 
+pm8150b {
+#address-cells = <1>;
+#size-cells = <0>;
+pm8150b_typec: typec@1500 {
+compatible = "qcom,pm8150b-usb-typec";
+reg = <0x1500>;
+interrupts = <0x2 0x15 0x5 IRQ_TYPE_EDGE_RISING>;
+
+connector {
+compatible = "usb-c-connector";
+power-role = "dual";
+data-role = "dual";
+ports {
+#address-cells = <1>;
+#size-cells = <0>;
+port@0 {
+reg = <0>;
+};
+port@1 {
+reg = <1>;
+#address-cells = <1>;
+#size-cells = <0>;
+usb3_data_ss: endpoint@0 {
+reg = <0>;
+remote-endpoint = <_ss_mux>;
+};
+usb3_role: endpoint@1 {
+reg = <1>;
+remote-endpoint = <_drd_switch>;
+};
+};
+};
+};
+};
+};
+...
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project



[PATCH v9 4/4] arm64: boot: dts: qcom: pm8150b: Add DTS node for PMIC VBUS booster

2020-09-04 Thread Wesley Cheng
Add the required DTS node for the USB VBUS output regulator, which is
available on PM8150B.  This will provide the VBUS source to connected
peripherals.

Signed-off-by: Wesley Cheng 
---
 arch/arm64/boot/dts/qcom/pm8150b.dtsi   | 6 ++
 arch/arm64/boot/dts/qcom/sm8150-mtp.dts | 4 
 2 files changed, 10 insertions(+)

diff --git a/arch/arm64/boot/dts/qcom/pm8150b.dtsi 
b/arch/arm64/boot/dts/qcom/pm8150b.dtsi
index 053c659734a7..b49caa63cd4c 100644
--- a/arch/arm64/boot/dts/qcom/pm8150b.dtsi
+++ b/arch/arm64/boot/dts/qcom/pm8150b.dtsi
@@ -53,6 +53,12 @@ power-on@800 {
status = "disabled";
};
 
+   pm8150b_vbus: regulator@1100 {
+   compatible = "qcom,pm8150b-vbus-reg";
+   status = "disabled";
+   reg = <0x1100>;
+   };
+
pm8150b_typec: typec@1500 {
compatible = "qcom,pm8150b-usb-typec";
status = "disabled";
diff --git a/arch/arm64/boot/dts/qcom/sm8150-mtp.dts 
b/arch/arm64/boot/dts/qcom/sm8150-mtp.dts
index 6c6325c3af59..ba3b5b802954 100644
--- a/arch/arm64/boot/dts/qcom/sm8150-mtp.dts
+++ b/arch/arm64/boot/dts/qcom/sm8150-mtp.dts
@@ -409,6 +409,10 @@ _mem_phy {
vdda-pll-max-microamp = <19000>;
 };
 
+_vbus {
+   status = "okay";
+};
+
 _1_hsphy {
status = "okay";
vdda-pll-supply = <_usb_hs_core>;
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project



[PATCH v9 0/4] Introduce PMIC based USB type C detection

2020-09-04 Thread Wesley Cheng
Changes in v9:
 - Fixed dt-binding to reference usb-connector from the 'connector' node,
   removed properties that didn't have further constraints (than specified in
   usb-connector.yaml), and make 'reg' a required property.
 - Moved vbus_reg get call into probe(), and will fail if the regulator is not
   available.
 - Removed some references from qcom_pmic_typec, as they were not needed after
   probe().
 - Moved interrupt registration until after all used variables were initialized.

Changes in v8:
 - Simplified some property definitions, and corrected the
   connector reference in the dt binding.

Changes in v7:
 - Fixups in qcom-pmic-typec.c to remove uncesscary includes, printk formatting,
   and revising some logic operations. 

Changes in v6:
 - Removed qcom_usb_vbus-regulator.c and qcom,usb-vbus-regulator.yaml from the
   series as they have been merged on regulator.git
 - Added separate references to the usb-connector.yaml in qcom,pmic-typec.yaml
   instead of referencing the entire schema.

Changes in v5:
 - Fix dt_binding_check warning/error in qcom,pmic-typec.yaml

Changes in v4:
 - Modified qcom,pmic-typec binding to include the SS mux and the DRD remote
   endpoint nodes underneath port@1, which is assigned to the SSUSB path
   according to usb-connector
 - Added usb-connector reference to the typec dt-binding
 - Added tags to the usb type c and vbus nodes
 - Removed "qcom" tags from type c and vbus nodes
 - Modified Kconfig module name, and removed module alias from the typec driver
 
Changes in v3:
 - Fix driver reference to match driver name in Kconfig for
   qcom_usb_vbus-regulator.c
 - Utilize regulator bitmap helpers for enable, disable and is enabled calls in
   qcom_usb_vbus-regulator.c
 - Use of_get_regulator_init_data() to initialize regulator init data, and to
   set constraints in qcom_usb_vbus-regulator.c
 - Remove the need for a local device structure in the vbus regulator driver
 
Changes in v2:
 - Use devm_kzalloc() in qcom_pmic_typec_probe()
 - Add checks to make sure return value of typec_find_port_power_role() is
   valid
 - Added a VBUS output regulator driver, which will be used by the PMIC USB
   type c driver to enable/disable the source
 - Added logic to control vbus source from the PMIC type c driver when
   UFP/DFP is detected
 - Added dt-binding for this new regulator driver
 - Fixed Kconfig typec notation to match others
 - Leave type C block disabled until enabled by a platform DTS

Wesley Cheng (4):
  usb: typec: Add QCOM PMIC typec detection driver
  dt-bindings: usb: Add Qualcomm PMIC type C controller dt-binding
  arm64: boot: dts: qcom: pm8150b: Add node for USB type C block
  arm64: boot: dts: qcom: pm8150b: Add DTS node for PMIC VBUS booster

 .../bindings/usb/qcom,pmic-typec.yaml | 108 
 arch/arm64/boot/dts/qcom/pm8150b.dtsi |  13 +
 arch/arm64/boot/dts/qcom/sm8150-mtp.dts   |   4 +
 drivers/usb/typec/Kconfig |  12 +
 drivers/usb/typec/Makefile|   1 +
 drivers/usb/typec/qcom-pmic-typec.c   | 262 ++
 6 files changed, 400 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/usb/qcom,pmic-typec.yaml
 create mode 100644 drivers/usb/typec/qcom-pmic-typec.c

-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project



[PATCH v9 3/4] arm64: boot: dts: qcom: pm8150b: Add node for USB type C block

2020-09-04 Thread Wesley Cheng
The PM8150B has a dedicated USB type C block, which can be used for type C
orientation and role detection.  Create the reference node to this type C
block for further use.

Signed-off-by: Wesley Cheng 
Reviewed-by: Bjorn Andersson 
---
 arch/arm64/boot/dts/qcom/pm8150b.dtsi | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/arch/arm64/boot/dts/qcom/pm8150b.dtsi 
b/arch/arm64/boot/dts/qcom/pm8150b.dtsi
index e112e8876db6..053c659734a7 100644
--- a/arch/arm64/boot/dts/qcom/pm8150b.dtsi
+++ b/arch/arm64/boot/dts/qcom/pm8150b.dtsi
@@ -53,6 +53,13 @@ power-on@800 {
status = "disabled";
};
 
+   pm8150b_typec: typec@1500 {
+   compatible = "qcom,pm8150b-usb-typec";
+   status = "disabled";
+   reg = <0x1500>;
+   interrupts = <0x2 0x15 0x5 IRQ_TYPE_EDGE_RISING>;
+   };
+
pm8150b_temp: temp-alarm@2400 {
compatible = "qcom,spmi-temp-alarm";
reg = <0x2400>;
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project



[PATCH v9 1/4] usb: typec: Add QCOM PMIC typec detection driver

2020-09-04 Thread Wesley Cheng
The QCOM SPMI typec driver handles the role and orientation detection, and
notifies client drivers using the USB role switch framework.   It registers
as a typec port, so orientation can be communicated using the typec switch
APIs.  The driver also attains a handle to the VBUS output regulator, so it
can enable/disable the VBUS source when acting as a host/device.

Signed-off-by: Wesley Cheng 
Acked-by: Heikki Krogerus 
Reviewed-by: Stephen Boyd 
---
 drivers/usb/typec/Kconfig   |  12 ++
 drivers/usb/typec/Makefile  |   1 +
 drivers/usb/typec/qcom-pmic-typec.c | 262 
 3 files changed, 275 insertions(+)
 create mode 100644 drivers/usb/typec/qcom-pmic-typec.c

diff --git a/drivers/usb/typec/Kconfig b/drivers/usb/typec/Kconfig
index 559dd06117e7..63789cf88fce 100644
--- a/drivers/usb/typec/Kconfig
+++ b/drivers/usb/typec/Kconfig
@@ -73,6 +73,18 @@ config TYPEC_TPS6598X
  If you choose to build this driver as a dynamically linked module, the
  module will be called tps6598x.ko.
 
+config TYPEC_QCOM_PMIC
+   tristate "Qualcomm PMIC USB Type-C driver"
+   depends on ARCH_QCOM || COMPILE_TEST
+   help
+ Driver for supporting role switch over the Qualcomm PMIC.  This will
+ handle the USB Type-C role and orientation detection reported by the
+ QCOM PMIC if the PMIC has the capability to handle USB Type-C
+ detection.
+
+ It will also enable the VBUS output to connected devices when a
+ DFP connection is made.
+
 source "drivers/usb/typec/mux/Kconfig"
 
 source "drivers/usb/typec/altmodes/Kconfig"
diff --git a/drivers/usb/typec/Makefile b/drivers/usb/typec/Makefile
index 7753a5c3cd46..cceffd987643 100644
--- a/drivers/usb/typec/Makefile
+++ b/drivers/usb/typec/Makefile
@@ -6,4 +6,5 @@ obj-$(CONFIG_TYPEC_TCPM)+= tcpm/
 obj-$(CONFIG_TYPEC_UCSI)   += ucsi/
 obj-$(CONFIG_TYPEC_HD3SS3220)  += hd3ss3220.o
 obj-$(CONFIG_TYPEC_TPS6598X)   += tps6598x.o
+obj-$(CONFIG_TYPEC_QCOM_PMIC)  += qcom-pmic-typec.o
 obj-$(CONFIG_TYPEC)+= mux/
diff --git a/drivers/usb/typec/qcom-pmic-typec.c 
b/drivers/usb/typec/qcom-pmic-typec.c
new file mode 100644
index ..9fc5e69d6e82
--- /dev/null
+++ b/drivers/usb/typec/qcom-pmic-typec.c
@@ -0,0 +1,262 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2020, The Linux Foundation. All rights reserved.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#define TYPEC_MISC_STATUS		0xb
+#define CC_ATTACHED			BIT(0)
+#define CC_ORIENTATION			BIT(1)
+#define SNK_SRC_MODE			BIT(6)
+#define TYPEC_MODE_CFG			0x44
+#define TYPEC_DISABLE_CMD		BIT(0)
+#define EN_SNK_ONLY			BIT(1)
+#define EN_SRC_ONLY			BIT(2)
+#define TYPEC_VCONN_CONTROL		0x46
+#define VCONN_EN_SRC			BIT(0)
+#define VCONN_EN_VAL			BIT(1)
+#define TYPEC_EXIT_STATE_CFG		0x50
+#define SEL_SRC_UPPER_REF		BIT(2)
+#define TYPEC_INTR_EN_CFG_1		0x5e
+#define TYPEC_INTR_EN_CFG_1_MASK	GENMASK(7, 0)
+
+struct qcom_pmic_typec {
+   struct device   *dev;
+   struct regmap   *regmap;
+   u32 base;
+
+   struct typec_port   *port;
+   struct usb_role_switch *role_sw;
+
+   struct regulator*vbus_reg;
+   boolvbus_enabled;
+};
+
+static void qcom_pmic_typec_enable_vbus_regulator(struct qcom_pmic_typec
+   *qcom_usb, bool enable)
+{
+   int ret;
+
+   if (enable == qcom_usb->vbus_enabled)
+   return;
+
+   if (enable) {
+   ret = regulator_enable(qcom_usb->vbus_reg);
+   if (ret)
+   return;
+   } else {
+   ret = regulator_disable(qcom_usb->vbus_reg);
+   if (ret)
+   return;
+   }
+   qcom_usb->vbus_enabled = enable;
+}
+
+static void qcom_pmic_typec_check_connection(struct qcom_pmic_typec *qcom_usb)
+{
+   enum typec_orientation orientation;
+   enum usb_role role;
+   unsigned int stat;
+   bool enable_vbus;
+
+	regmap_read(qcom_usb->regmap, qcom_usb->base + TYPEC_MISC_STATUS,
+		    &stat);
+
+   if (stat & CC_ATTACHED) {
+   orientation = (stat & CC_ORIENTATION) ?
+   TYPEC_ORIENTATION_REVERSE :
+   TYPEC_ORIENTATION_NORMAL;
+   typec_set_orientation(qcom_usb->port, orientation);
+
+   role = (stat & SNK_SRC_MODE) ? USB_ROLE_HOST : USB_ROLE_DEVICE;
+   if (role == USB_ROLE_HOST)
+   enable_vbus = true;
+   e
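
The attach-status decode above can be sketched in Python. Bit positions are
copied from the patch's defines; the message is truncated here, so the
device-role branch (no VBUS sourcing when acting as a device) is an
assumption based on the preceding ternary.

```python
# Bit positions from the patch's TYPEC_MISC_STATUS defines.
CC_ATTACHED    = 1 << 0
CC_ORIENTATION = 1 << 1
SNK_SRC_MODE   = 1 << 6

def decode_misc_status(stat):
    """Mirror the driver's decode of the TYPEC_MISC_STATUS register."""
    if not stat & CC_ATTACHED:
        return None  # nothing connected on the CC lines
    orientation = "reverse" if stat & CC_ORIENTATION else "normal"
    role = "host" if stat & SNK_SRC_MODE else "device"
    # Assumption: VBUS is sourced only when we act as the host (DFP).
    return {"orientation": orientation, "role": role,
            "enable_vbus": role == "host"}

print(decode_misc_status(0b0100_0001))
# {'orientation': 'normal', 'role': 'host', 'enable_vbus': True}
```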

Re: [PATCH v8 4/4] arm64: boot: dts: qcom: pm8150b: Add DTS node for PMIC VBUS booster

2020-09-03 Thread Wesley Cheng



On 8/30/2020 10:52 AM, Bjorn Andersson wrote:
> On Thu 20 Aug 07:47 UTC 2020, Wesley Cheng wrote:
> 
>>
>>
>> On 8/12/2020 2:34 AM, Sergei Shtylyov wrote:
>>> Hello!
>>>
>>> On 12.08.2020 10:19, Wesley Cheng wrote:
>>>
>>>> Add the required DTS node for the USB VBUS output regulator, which is
>>>> available on PM8150B.  This will provide the VBUS source to connected
>>>> peripherals.
>>>>
>>>> Signed-off-by: Wesley Cheng 
>>>> ---
>>>>   arch/arm64/boot/dts/qcom/pm8150b.dtsi   | 6 ++
>>>>   arch/arm64/boot/dts/qcom/sm8150-mtp.dts | 4 
>>>>   2 files changed, 10 insertions(+)
>>>>
>>>> diff --git a/arch/arm64/boot/dts/qcom/pm8150b.dtsi
>>>> b/arch/arm64/boot/dts/qcom/pm8150b.dtsi
>>>> index 053c659734a7..9e560c1ca30d 100644
>>>> --- a/arch/arm64/boot/dts/qcom/pm8150b.dtsi
>>>> +++ b/arch/arm64/boot/dts/qcom/pm8150b.dtsi
>>>> @@ -53,6 +53,12 @@ power-on@800 {
>>>>   status = "disabled";
>>>>   };
>>>>   +    pm8150b_vbus: dcdc@1100 {
>>>
>>>    s/dcdc/regulator/? What is "dcdc", anyway?
>>>    The device nodes must have the generic names, according to the DT spec.
>>>
>>
>> Hi Sergei,
>>
>> Thanks for the comment!
>>
>> DCDC is the label that we use for the DC to DC converter block, since
>> the VBUS booster will output 5V to the connected devices.  Would it make
>> more sense to have "dc-dc?"
>>
> 
> At this level it's just a regulator at 0x1100, so it should be
> "regulator@1100". If you would like a more useful name in the running
> system you should be able to use the "regulator-name" property.
> 
> Regards,
> Bjorn
> 

Hi Bjorn,

Thanks for the suggestion.  Sounds good, I will just use the "regulator"
name for now.

Thanks
Wesley

>> Thanks
>> Wesley
>>
>>>> +    compatible = "qcom,pm8150b-vbus-reg";
>>>> +    status = "disabled";
>>>> +    reg = <0x1100>;
>>>> +    };
>>>> +
>>>>   pm8150b_typec: typec@1500 {
>>>>   compatible = "qcom,pm8150b-usb-typec";
>>>>   status = "disabled";
>>> [...]
>>>
>>> MBR, Sergei
>>
>> -- 
>> The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
>> a Linux Foundation Collaborative Project

-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project


Re: [PATCH v8 1/4] usb: typec: Add QCOM PMIC typec detection driver

2020-09-03 Thread Wesley Cheng



On 8/30/2020 11:54 AM, Bjorn Andersson wrote:
> On Wed 12 Aug 07:19 UTC 2020, Wesley Cheng wrote:
> 
>> The QCOM SPMI typec driver handles the role and orientation detection, and
>> notifies client drivers using the USB role switch framework.   It registers
>> as a typec port, so orientation can be communicated using the typec switch
>> APIs.  The driver also attains a handle to the VBUS output regulator, so it
>> can enable/disable the VBUS source when acting as a host/device.
>>
>> Signed-off-by: Wesley Cheng 
>> Acked-by: Heikki Krogerus 
>> Reviewed-by: Stephen Boyd 
>> ---
>>  drivers/usb/typec/Kconfig   |  12 ++
>>  drivers/usb/typec/Makefile  |   1 +
>>  drivers/usb/typec/qcom-pmic-typec.c | 271 
>>  3 files changed, 284 insertions(+)
>>  create mode 100644 drivers/usb/typec/qcom-pmic-typec.c
>>
>> diff --git a/drivers/usb/typec/Kconfig b/drivers/usb/typec/Kconfig
>> index 559dd06117e7..63789cf88fce 100644
>> --- a/drivers/usb/typec/Kconfig
>> +++ b/drivers/usb/typec/Kconfig
>> @@ -73,6 +73,18 @@ config TYPEC_TPS6598X
>>If you choose to build this driver as a dynamically linked module, the
>>module will be called tps6598x.ko.
>>  
>> +config TYPEC_QCOM_PMIC
>> +tristate "Qualcomm PMIC USB Type-C driver"
>> +depends on ARCH_QCOM || COMPILE_TEST
>> +help
>> +  Driver for supporting role switch over the Qualcomm PMIC.  This will
>> +  handle the USB Type-C role and orientation detection reported by the
>> +  QCOM PMIC if the PMIC has the capability to handle USB Type-C
>> +  detection.
>> +
>> +  It will also enable the VBUS output to connected devices when a
>> +  DFP connection is made.
>> +
>>  source "drivers/usb/typec/mux/Kconfig"
>>  
>>  source "drivers/usb/typec/altmodes/Kconfig"
>> diff --git a/drivers/usb/typec/Makefile b/drivers/usb/typec/Makefile
>> index 7753a5c3cd46..cceffd987643 100644
>> --- a/drivers/usb/typec/Makefile
>> +++ b/drivers/usb/typec/Makefile
>> @@ -6,4 +6,5 @@ obj-$(CONFIG_TYPEC_TCPM) += tcpm/
>>  obj-$(CONFIG_TYPEC_UCSI)+= ucsi/
>>  obj-$(CONFIG_TYPEC_HD3SS3220)   += hd3ss3220.o
>>  obj-$(CONFIG_TYPEC_TPS6598X)+= tps6598x.o
>> +obj-$(CONFIG_TYPEC_QCOM_PMIC)   += qcom-pmic-typec.o
>>  obj-$(CONFIG_TYPEC) += mux/
>> diff --git a/drivers/usb/typec/qcom-pmic-typec.c 
>> b/drivers/usb/typec/qcom-pmic-typec.c
>> new file mode 100644
>> index ..20b2b6502cb3
>> --- /dev/null
>> +++ b/drivers/usb/typec/qcom-pmic-typec.c
>> @@ -0,0 +1,271 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * Copyright (c) 2020, The Linux Foundation. All rights reserved.
>> + */
>> +
>> +#include 
>> +#include 
>> +#include 
>> +#include 
>> +#include 
>> +#include 
>> +#include 
>> +#include 
>> +#include 
>> +#include 
>> +#include 
> 
> Please sort these alphabetically.
> 

Hi Bjorn,

Thanks for all the inputs.  Will make all the suggested changes.

Thanks
Wesley

>> +
>> +#define TYPEC_MISC_STATUS   0xb
>> +#define CC_ATTACHED BIT(0)
>> +#define CC_ORIENTATION  BIT(1)
>> +#define SNK_SRC_MODEBIT(6)
>> +#define TYPEC_MODE_CFG  0x44
>> +#define TYPEC_DISABLE_CMD   BIT(0)
>> +#define EN_SNK_ONLY BIT(1)
>> +#define EN_SRC_ONLY BIT(2)
>> +#define TYPEC_VCONN_CONTROL 0x46
>> +#define VCONN_EN_SRCBIT(0)
>> +#define VCONN_EN_VALBIT(1)
>> +#define TYPEC_EXIT_STATE_CFG0x50
>> +#define SEL_SRC_UPPER_REF   BIT(2)
>> +#define TYPEC_INTR_EN_CFG_1 0x5e
>> +#define TYPEC_INTR_EN_CFG_1_MASKGENMASK(7, 0)
>> +
>> +struct qcom_pmic_typec {
>> +struct device   *dev;
>> +struct fwnode_handle*fwnode;
>> +struct regmap   *regmap;
>> +u32 base;
>> +
>> +struct typec_capability *cap;
>> +struct typec_port   *port;
>> +struct usb_role_switch *role_sw;
>> +
>> +struct regulator*vbus_reg;
>> +boolvbus_enabled;
>> +};
>> +
>> +static void qcom_pmic_typec_enable_vbus_regulator(struct qcom_pmic_typec
>> +*qcom_usb, bool enable)
>> +{
>&g

[PATCH v3] usb: dwc3: Stop active transfers before halting the controller

2020-09-03 Thread Wesley Cheng
In the DWC3 databook, for a device initiated disconnect or bus reset, the
driver is required to send dependxfer commands for any pending transfers.
In addition, before the controller can move to the halted state, the SW
needs to acknowledge any pending events.  If the controller is not halted
properly, there is a chance the controller will continue accessing stale or
freed TRBs and buffers.
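The required ordering can be modeled in a few lines. This is a toy sketch, not
driver code; the function names and the log list are purely illustrative
stand-ins for the end-transfer, event-acknowledge, and run/stop steps
described above.

```python
# Toy model (not driver code) of the ordering the patch enforces: end active
# transfers and acknowledge pending events before the controller is halted.
log = []

def stop_active_transfers():
    log.append("endxfer")        # end-transfer command issued per started endpoint

def ack_pending_events():
    log.append("ack-events")     # GEVNTCOUNT written back so it reads zero

def halt_controller():
    log.append("halt")           # only now is the run/stop bit cleared

def device_initiated_disconnect():
    stop_active_transfers()
    ack_pending_events()
    halt_controller()

device_initiated_disconnect()
assert log == ["endxfer", "ack-events", "halt"]
```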

Signed-off-by: Wesley Cheng 

---
Changes in v3:
 - Removed DWC3_EP_ENABLED check from dwc3_gadget_stop_active_transfers()
   as dwc3_stop_active_transfer() has a check already in place.
 - Calling __dwc3_gadget_stop() which ensures that DWC3 interrupt events
   are cleared, and ep0 eps are cleared for the pullup disabled case.  Not
   required to call __dwc3_gadget_start() on pullup enable, as the
   composite driver will execute udc_start() before calling pullup().

Changes in v2:
 - Moved cleanup code to the pullup() API to differentiate between device
   disconnect and hibernation.
 - Added cleanup code to the bus reset case as well.
 - Verified the move to pullup() did not reproduce the problem using the
   same test sequence.

Verified fix by adding a check for ETIMEDOUT during the run stop call.
Shell script writing to the configfs UDC file to trigger disconnect and
connect.  Batch script to have PC execute data transfers over adb (ie adb
push).  After a few iterations, we'd run into a scenario where the
controller wasn't halted.  With the following change, no failed halts after
many iterations.
---
 drivers/usb/dwc3/ep0.c|  2 +-
 drivers/usb/dwc3/gadget.c | 49 ++-
 2 files changed, 49 insertions(+), 2 deletions(-)

diff --git a/drivers/usb/dwc3/ep0.c b/drivers/usb/dwc3/ep0.c
index 59f2e8c31bd1..456aa87e8778 100644
--- a/drivers/usb/dwc3/ep0.c
+++ b/drivers/usb/dwc3/ep0.c
@@ -197,7 +197,7 @@ int dwc3_gadget_ep0_queue(struct usb_ep *ep, struct 
usb_request *request,
int ret;
 
	spin_lock_irqsave(&dwc->lock, flags);
-   if (!dep->endpoint.desc) {
+   if (!dep->endpoint.desc || !dwc->pullups_connected) {
dev_err(dwc->dev, "%s: can't queue to disabled endpoint\n",
dep->name);
ret = -ESHUTDOWN;
diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
index 3ab6f118c508..73bda7eaa773 100644
--- a/drivers/usb/dwc3/gadget.c
+++ b/drivers/usb/dwc3/gadget.c
@@ -1516,7 +1516,7 @@ static int __dwc3_gadget_ep_queue(struct dwc3_ep *dep, 
struct dwc3_request *req)
 {
struct dwc3 *dwc = dep->dwc;
 
-   if (!dep->endpoint.desc) {
+   if (!dep->endpoint.desc || !dwc->pullups_connected) {
dev_err(dwc->dev, "%s: can't queue to disabled endpoint\n",
dep->name);
return -ESHUTDOWN;
@@ -1926,6 +1926,21 @@ static int dwc3_gadget_set_selfpowered(struct usb_gadget 
*g,
return 0;
 }
 
+static void dwc3_stop_active_transfers(struct dwc3 *dwc)
+{
+   u32 epnum;
+
+   for (epnum = 2; epnum < DWC3_ENDPOINTS_NUM; epnum++) {
+   struct dwc3_ep *dep;
+
+   dep = dwc->eps[epnum];
+   if (!dep)
+   continue;
+
+   dwc3_remove_requests(dwc, dep);
+   }
+}
+
 static int dwc3_gadget_run_stop(struct dwc3 *dwc, int is_on, int suspend)
 {
u32 reg;
@@ -1971,6 +1986,8 @@ static int dwc3_gadget_run_stop(struct dwc3 *dwc, int 
is_on, int suspend)
return 0;
 }
 
+static void __dwc3_gadget_stop(struct dwc3 *dwc);
+
 static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
 {
struct dwc3 *dwc = gadget_to_dwc(g);
@@ -1994,9 +2011,37 @@ static int dwc3_gadget_pullup(struct usb_gadget *g, int 
is_on)
}
}
 
+   /*
+* Synchronize and disable any further event handling while controller
+* is being enabled/disabled.
+*/
+   disable_irq(dwc->irq_gadget);
	spin_lock_irqsave(&dwc->lock, flags);
+
+   /* Controller is not halted until pending events are acknowledged */
+   if (!is_on) {
+   u32 count;
+
+   /*
+* The databook explicitly mentions for a device-initiated
+* disconnect sequence, the SW needs to ensure that it ends any
+* active transfers.
+*/
+   dwc3_stop_active_transfers(dwc);
+   __dwc3_gadget_stop(dwc);
+
+   count = dwc3_readl(dwc->regs, DWC3_GEVNTCOUNT(0));
+   count &= DWC3_GEVNTCOUNT_MASK;
+   if (count > 0) {
+   dwc3_writel(dwc->regs, DWC3_GEVNTCOUNT(0), count);
+   dwc->ev_buf->lpos = (dwc->ev_buf->lpos + count) %
+   dwc->ev_buf->length;
+   }
+   

Re: [google-appengine] Re: Obtaining refresh_token through offline_access scope to be used with IAP

2020-09-02 Thread wesley chun
That's great to hear José. I'm a bit new to using JWT tokens myself (having
only used OAuth2 access & refresh tokens) when talking to Google APIs. One
big advantage to JWT tokens is that you save API calls to both exchange a
JWT for an access token as well as your case of needing to send a refresh
token to get a new/valid access token. Cheers!

On Wed, Sep 2, 2020 at 1:04 AM 'José Cantera' via Google App Engine <
google-appengine@googlegroups.com> wrote:

> Yes, Wesley is right, in the end a refresh_token is not that necessary in
> this case, it suffices with self-signing a new JWT token with the exp
> timestamp updated,
>
> thanks!
>
> On Wed, Sep 2, 2020 at 3:26 AM wesley chun  wrote:
>
>> Hi, I may not be correct in my understanding but believe that refresh
>> tokens are only used in cases where you're using OAuth2 access tokens for
>> authorization. Since you're using a self-signed JWT instead of an access
>> token
>> <https://developers.google.com/identity/protocols/oauth2/service-account#jwt-auth>,
>> I don't think the useful reference that David linked to applies in your
>> case (and BTW, this is independent of whether you're using IAP or not).
>>
>> Since you're signing the JWT token, can't you simply resign it with an
>> updated timestamp in your JWT payload (as shown a bit further down on the
>> page I just linked to above)? (I believe that'll have the same effect of
>> using a refresh token to get an updated access token.)
>>
>> On Tue, Sep 1, 2020 at 2:00 PM 'David (Cloud Platform Support)' via
>> Google App Engine  wrote:
>>
>>> This documentation
>>> <https://developers.google.com/identity/protocols/oauth2/web-server#offline>
>>> about refreshing an access token (offline access) using Google's
>>> authorization server could be helpful.
>>>
>>> On Monday, August 31, 2020 at 8:26:25 AM UTC-4 jose.c...@iota.org wrote:
>>>
>>>> I am using IAP to protect a Web API Application. I have enabled a
>>>> service account to get access to the APIs through an id_token. I am able to
>>>> obtain an id_token (JWT) by signing a JWT (using the keys of my service
>>>> account) with the following assertions
>>>> {
>>>>  "iss": "xx.iam.gserviceaccount.com",
>>>>  "sub": "xx.iam.gserviceaccount.com",
>>>>  "aud": "https://oauth2.googleapis.com/token",
>>>>  "target_audience": "my_application_client_id",
>>>>  "iat": 1598702078,
>>>>  "exp": 1598705593
>>>>  }
>>>>
>>>> and then Posting to the token service as follows
>>>> curl --location --request POST 'https://oauth2.googleapis.com/token' \
>>>> --header 'Content-Type: application/x-www-form-urlencoded' \
>>>>  --data-urlencode 'assertion=' \
>>>>  --data-urlencode
>>>> 'grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer' \
>>>> --data-urlencode 'scope=openid'
>>>>
>>>> Now I would like to also obtain a refresh_token, but it has been
>>>> impossible. I have tried with *scope=openid offline_access* but no
>>>> luck. Is *offline_access* implemented in the Google Auth Server? Any
>>>> other mechanism to obtain a refresh_token?
>>>>
>>>> Thank you very much
>>>>
>>>> *IOTA Foundation*
>>>> c/o Nextland
>>>> Strassburgerstraße 55
>>>> 10405 Berlin, Germany
>>>>
>>>> Board of Directors: Dominik Schiener, David Sønstebø, Serguei Popov,
>>>> Navin Ramachandran
>>>> ID/Foundation No.: 3416/1234/2 (Foundation Register of Berlin)
>>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "Google App Engine" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to google-appengine+unsubscr...@googlegroups.com.
>>> To view this discussion on the web visit
>>> https://groups.google.com/d/msgid/google-appengine/9687fedb-925a-42ef-9c26-439febdf168fn%40googlegroups.com
>>> <https://groups.google.com/d/msgid/google-appengine/9687fedb-925a-42ef-9c26-439febdf168fn%40googlegroups.com?utm_medium=email_source=footer>
>>> .
>>>
>>
>>
>> --
>> - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
>> "A computer never does what you want... only what you tell it."
>> wesley chun :: @wescpy <http://twitter.com/w

[Mahara-contributors] [Bug 1893941] [NEW] Elasticsearch on Localhost and Proxy entry

2020-09-02 Thread Wesley Richards
Public bug reported:

Hello everyone,
I just discovered a nasty bug.
If you configure a proxy for Mahara and use Elasticsearch on localhost, the 
proxy entry interferes with the access to Elasticsearch. Mahara can then no 
longer connect to Elasticsearch.
Mahara may try to connect to Elasticsearch through the proxy, but localhost on 
the Mahara server is different than localhost on the proxy.
Please excuse my bad English but I hope you understand what I mean ;-)
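As a hedged illustration (Mahara's actual proxy setting lives in its admin
configuration, not necessarily in environment variables): when a proxy is
applied globally, the usual workaround is to exempt localhost so traffic to
the local Elasticsearch stays direct. The proxy host and port below are
examples only.

```shell
# Example only: exempt the local Elasticsearch endpoint from a global proxy.
export http_proxy="http://proxy.example.com:3128"   # hypothetical proxy
export no_proxy="localhost,127.0.0.1"               # keep localhost:9200 direct
echo "proxy=$http_proxy no_proxy=$no_proxy"
```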

Mahara: 20.04.1
Elasticsearch: 6.8.12

Nice greetings from Kassel

Wesley

** Affects: mahara
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Mahara
Contributors, which is subscribed to Mahara.
Matching subscriptions: Subscription for all Mahara Contributors -- please ask 
on #mahara-dev or mahara.org forum before editing or unsubscribing it!
https://bugs.launchpad.net/bugs/1893941

Title:
  Elasticsearch on Localhost and Proxy entry

Status in Mahara:
  New

Bug description:
  Hello everyone,
  I just discovered a nasty bug.
  If you configure a proxy for Mahara and use Elasticsearch on localhost, the 
proxy entry interferes with the access to Elasticsearch. Mahara can then no 
longer connect to Elasticsearch.
  Mahara may try to connect to Elasticsearch through the proxy, but localhost on 
the Mahara server is different than localhost on the proxy.
  Please excuse my bad English but I hope you understand what I mean ;-)

  Mahara: 20.04.1
  Elasticsearch: 6.8.12

  Nice greetings from Kassel

  Wesley

To manage notifications about this bug go to:
https://bugs.launchpad.net/mahara/+bug/1893941/+subscriptions

___
Mailing list: https://launchpad.net/~mahara-contributors
Post to : mahara-contributors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~mahara-contributors
More help   : https://help.launchpad.net/ListHelp


Re: [PATCH v2] usb: dwc3: Stop active transfers before halting the controller

2020-09-01 Thread Wesley Cheng



On 9/1/2020 3:14 PM, Wesley Cheng wrote:
> 
> 
> On 8/29/2020 2:35 PM, Thinh Nguyen wrote:
>> Wesley Cheng wrote:
>>> In the DWC3 databook, for a device initiated disconnect or bus reset, the
>>> driver is required to send dependxfer commands for any pending transfers.
>>> In addition, before the controller can move to the halted state, the SW
>>> needs to acknowledge any pending events.  If the controller is not halted
>>> properly, there is a chance the controller will continue accessing stale or
>>> freed TRBs and buffers.
>>>
>>> Signed-off-by: Wesley Cheng 
>>>
>>> ---
>>> Changes in v2:
>>>  - Moved cleanup code to the pullup() API to differentiate between device
>>>disconnect and hibernation.
>>>  - Added cleanup code to the bus reset case as well.
>>>  - Verified the move to pullup() did not reproduce the problem using the
>>>same test sequence.
>>>
>>> Verified fix by adding a check for ETIMEDOUT during the run stop call.
>>> Shell script writing to the configfs UDC file to trigger disconnect and
>>> connect.  Batch script to have PC execute data transfers over adb (ie adb
>>> push)  After a few iterations, we'd run into a scenario where the
>>> controller wasn't halted.  With the following change, no failed halts after
>>> many iterations.
>>> ---
>>>  drivers/usb/dwc3/ep0.c|  2 +-
>>>  drivers/usb/dwc3/gadget.c | 52 ++-
>>>  2 files changed, 52 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/drivers/usb/dwc3/ep0.c b/drivers/usb/dwc3/ep0.c
>>> index 59f2e8c31bd1..456aa87e8778 100644
>>> --- a/drivers/usb/dwc3/ep0.c
>>> +++ b/drivers/usb/dwc3/ep0.c
>>> @@ -197,7 +197,7 @@ int dwc3_gadget_ep0_queue(struct usb_ep *ep, struct 
>>> usb_request *request,
>>> int ret;
>>>  
>>> spin_lock_irqsave(&dwc->lock, flags);
>>> -   if (!dep->endpoint.desc) {
>>> +   if (!dep->endpoint.desc || !dwc->pullups_connected) {
>>> dev_err(dwc->dev, "%s: can't queue to disabled endpoint\n",
>>> dep->name);
>>> ret = -ESHUTDOWN;
>>> diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
>>> index 3ab6f118c508..df8d89d6bdc9 100644
>>> --- a/drivers/usb/dwc3/gadget.c
>>> +++ b/drivers/usb/dwc3/gadget.c
>>> @@ -1516,7 +1516,7 @@ static int __dwc3_gadget_ep_queue(struct dwc3_ep 
>>> *dep, struct dwc3_request *req)
>>>  {
>>> struct dwc3 *dwc = dep->dwc;
>>>  
>>> -   if (!dep->endpoint.desc) {
>>> +   if (!dep->endpoint.desc || !dwc->pullups_connected) {
>>> dev_err(dwc->dev, "%s: can't queue to disabled endpoint\n",
>>> dep->name);
>>> return -ESHUTDOWN;
>>> @@ -1926,6 +1926,24 @@ static int dwc3_gadget_set_selfpowered(struct 
>>> usb_gadget *g,
>>> return 0;
>>>  }
>>>  
>>> +static void dwc3_stop_active_transfers(struct dwc3 *dwc)
>>> +{
>>> +   u32 epnum;
>>> +
>>> +   for (epnum = 2; epnum < DWC3_ENDPOINTS_NUM; epnum++) {
>>> +   struct dwc3_ep *dep;
>>> +
>>> +   dep = dwc->eps[epnum];
>>> +   if (!dep)
>>> +   continue;
>>> +
>>> +   if (!(dep->flags & DWC3_EP_ENABLED))
>>> +   continue;
>>
>> Don't do the enabled check here. Let the dwc3_stop_active_transfer() do
>> that checking.
>>
> 
> Hi Thinh,
> 
> Thanks for the detailed review, as always.  Got it, we can allow that to
> catch it based off the DWC3_EP_TRANSFER_STARTED.
> 
>>> +
>>> +   dwc3_remove_requests(dwc, dep);
>>> +   }
>>> +}
>>> +
>>>  static int dwc3_gadget_run_stop(struct dwc3 *dwc, int is_on, int suspend)
>>>  {
>>> u32 reg;
>>> @@ -1994,9 +2012,39 @@ static int dwc3_gadget_pullup(struct usb_gadget *g, 
>>> int is_on)
>>> }
>>> }
>>>  
>>> +   /*
>>> +* Synchronize and disable any further event handling while controller
>>> +* is being enabled/disabled.
>>> +*/
>>> +   disable_irq(dwc->irq_gadget);
>>
>> I think it's better to do dwc3_gadget_disable_irq(). Thi

Re: [google-appengine] Re: Obtaining refresh_token through offline_access scope to be used with IAP

2020-09-01 Thread wesley chun
Hi, I may not be correct in my understanding but believe that refresh
tokens are only used in cases where you're using OAuth2 access tokens for
authorization. Since you're using a self-signed JWT instead of an access
token
<https://developers.google.com/identity/protocols/oauth2/service-account#jwt-auth>,
I don't think the useful reference that David linked to applies in your
case (and BTW, this is independent of whether you're using IAP or not).

Since you're signing the JWT token, can't you simply resign it with an
updated timestamp in your JWT payload (as shown a bit further down on the
page I just linked to above)? (I believe that'll have the same effect of
using a refresh token to get an updated access token.)
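A minimal sketch of the re-signing idea, using only the Python standard
library. Google service accounts actually sign with RS256 and the account's
private key; the HS256 secret, claim values, and helper names below are
hypothetical stand-ins, kept simple so the timestamp refresh is the focus.

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(claims: dict, secret: bytes) -> str:
    header = {"alg": "HS256", "typ": "JWT"}  # illustration only; GCP uses RS256
    signing_input = ".".join(b64url(json.dumps(p).encode()) for p in (header, claims))
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

def resign_with_fresh_exp(claims: dict, secret: bytes, lifetime: int = 3600) -> str:
    now = int(time.time())
    # only iat/exp change; iss/sub/aud/target_audience stay as they were
    return sign_jwt(dict(claims, iat=now, exp=now + lifetime), secret)

claims = {"iss": "svc@example.iam.gserviceaccount.com", "exp": 0}  # expired token
token = resign_with_fresh_exp(claims, b"demo-secret")

seg = token.split(".")[1]
payload = json.loads(base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4)))
assert payload["exp"] > int(time.time())  # exp is fresh again
```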

On Tue, Sep 1, 2020 at 2:00 PM 'David (Cloud Platform Support)' via Google
App Engine  wrote:

> This documentation
> <https://developers.google.com/identity/protocols/oauth2/web-server#offline>
> about refreshing an access token (offline access) using Google's
> authorization server could be helpful.
>
> On Monday, August 31, 2020 at 8:26:25 AM UTC-4 jose.c...@iota.org wrote:
>
>> I am using IAP to protect a Web API Application. I have enabled a service
>> account to get access to the APIs through an id_token. I am able to obtain
>> an id_token (JWT) by signing a JWT (using the keys of my service account)
>> with the following assertions
>> {
>>  "iss": "xx.iam.gserviceaccount.com",
>>  "sub": "xx.iam.gserviceaccount.com",
>>  "aud": "https://oauth2.googleapis.com/token",
>>  "target_audience": "my_application_client_id",
>>  "iat": 1598702078,
>>  "exp": 1598705593
>>  }
>>
>> and then Posting to the token service as follows
>> curl --location --request POST 'https://oauth2.googleapis.com/token' \
>> --header 'Content-Type: application/x-www-form-urlencoded' \
>>  --data-urlencode 'assertion=' \
>>  --data-urlencode
>> 'grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer' \
>> --data-urlencode 'scope=openid'
>>
>> Now I would like to also obtain a refresh_token, but it has been impossible.
>> I have tried with *scope=openid offline_access* but no luck. Is
>> *offline_access* implemented in the Google Auth Server? Any other
>> mechanism to obtain a refresh_token?
>>
>> Thank you very much
>>
>> *IOTA Foundation*
>> c/o Nextland
>> Strassburgerstraße 55
>> 10405 Berlin, Germany
>>
>> Board of Directors: Dominik Schiener, David Sønstebø, Serguei Popov,
>> Navin Ramachandran
>> ID/Foundation No.: 3416/1234/2 (Foundation Register of Berlin)
>>
> --
> You received this message because you are subscribed to the Google Groups
> "Google App Engine" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to google-appengine+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/google-appengine/9687fedb-925a-42ef-9c26-439febdf168fn%40googlegroups.com
> <https://groups.google.com/d/msgid/google-appengine/9687fedb-925a-42ef-9c26-439febdf168fn%40googlegroups.com?utm_medium=email_source=footer>
> .
>


-- 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
"A computer never does what you want... only what you tell it."
wesley chun :: @wescpy <http://twitter.com/wescpy> :: Software
Architect & Engineer
Developer Advocate at Google Cloud by day; at night...
Python training & consulting : http://CyberwebConsulting.com
"Core Python" books : http://CorePython.com
Python blog: http://wescpy.blogspot.com

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to google-appengine+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/google-appengine/CAB6eaA7w%3DYu9dOZ7ChhgE8Bf9Bb%2BjFxoMTsMbuTuHZ7dURqDrw%40mail.gmail.com.


Re: [PATCH v2] usb: dwc3: Stop active transfers before halting the controller

2020-09-01 Thread Wesley Cheng



On 8/29/2020 2:35 PM, Thinh Nguyen wrote:
> Wesley Cheng wrote:
>> In the DWC3 databook, for a device initiated disconnect or bus reset, the
>> driver is required to send dependxfer commands for any pending transfers.
>> In addition, before the controller can move to the halted state, the SW
>> needs to acknowledge any pending events.  If the controller is not halted
>> properly, there is a chance the controller will continue accessing stale or
>> freed TRBs and buffers.
>>
>> Signed-off-by: Wesley Cheng 
>>
>> ---
>> Changes in v2:
>>  - Moved cleanup code to the pullup() API to differentiate between device
>>disconnect and hibernation.
>>  - Added cleanup code to the bus reset case as well.
>>  - Verified the move to pullup() did not reproduce the problem using the
>>same test sequence.
>>
>> Verified fix by adding a check for ETIMEDOUT during the run stop call.
>> Shell script writing to the configfs UDC file to trigger disconnect and
>> connect.  Batch script to have PC execute data transfers over adb (ie adb
>> push)  After a few iterations, we'd run into a scenario where the
>> controller wasn't halted.  With the following change, no failed halts after
>> many iterations.
>> ---
>>  drivers/usb/dwc3/ep0.c|  2 +-
>>  drivers/usb/dwc3/gadget.c | 52 ++-
>>  2 files changed, 52 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/usb/dwc3/ep0.c b/drivers/usb/dwc3/ep0.c
>> index 59f2e8c31bd1..456aa87e8778 100644
>> --- a/drivers/usb/dwc3/ep0.c
>> +++ b/drivers/usb/dwc3/ep0.c
>> @@ -197,7 +197,7 @@ int dwc3_gadget_ep0_queue(struct usb_ep *ep, struct 
>> usb_request *request,
>>  int ret;
>>  
>>  spin_lock_irqsave(&dwc->lock, flags);
>> -if (!dep->endpoint.desc) {
>> +if (!dep->endpoint.desc || !dwc->pullups_connected) {
>>  dev_err(dwc->dev, "%s: can't queue to disabled endpoint\n",
>>  dep->name);
>>  ret = -ESHUTDOWN;
>> diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
>> index 3ab6f118c508..df8d89d6bdc9 100644
>> --- a/drivers/usb/dwc3/gadget.c
>> +++ b/drivers/usb/dwc3/gadget.c
>> @@ -1516,7 +1516,7 @@ static int __dwc3_gadget_ep_queue(struct dwc3_ep *dep, 
>> struct dwc3_request *req)
>>  {
>>  struct dwc3 *dwc = dep->dwc;
>>  
>> -if (!dep->endpoint.desc) {
>> +if (!dep->endpoint.desc || !dwc->pullups_connected) {
>>  dev_err(dwc->dev, "%s: can't queue to disabled endpoint\n",
>>  dep->name);
>>  return -ESHUTDOWN;
>> @@ -1926,6 +1926,24 @@ static int dwc3_gadget_set_selfpowered(struct 
>> usb_gadget *g,
>>  return 0;
>>  }
>>  
>> +static void dwc3_stop_active_transfers(struct dwc3 *dwc)
>> +{
>> +u32 epnum;
>> +
>> +for (epnum = 2; epnum < DWC3_ENDPOINTS_NUM; epnum++) {
>> +struct dwc3_ep *dep;
>> +
>> +dep = dwc->eps[epnum];
>> +if (!dep)
>> +continue;
>> +
>> +if (!(dep->flags & DWC3_EP_ENABLED))
>> +continue;
> 
> Don't do the enabled check here. Let the dwc3_stop_active_transfer() do
> that checking.
> 

Hi Thinh,

Thanks for the detailed review, as always.  Got it, we can allow that to
catch it based off the DWC3_EP_TRANSFER_STARTED.

>> +
>> +dwc3_remove_requests(dwc, dep);
>> +}
>> +}
>> +
>>  static int dwc3_gadget_run_stop(struct dwc3 *dwc, int is_on, int suspend)
>>  {
>>  u32 reg;
>> @@ -1994,9 +2012,39 @@ static int dwc3_gadget_pullup(struct usb_gadget *g, 
>> int is_on)
>>  }
>>  }
>>  
>> +/*
>> + * Synchronize and disable any further event handling while controller
>> + * is being enabled/disabled.
>> + */
>> +disable_irq(dwc->irq_gadget);
> 
> I think it's better to do dwc3_gadget_disable_irq(). This only stops
> handling events. Although it's unlikely, the controller may still
> generate events before it's halted.
> 

I think it's better if we can do both.  At least with the disable_irq()
call present, we can ensure the irq handlers are complete, or we can do
as Felipe suggested, and first disable the controller events (using
dwc3_gadget_disable_irq()) and then calling synchronize_irq().

The concern I had is the pullup() API updat

Re: [rdo-users] [TripleO][Ussuri] image prepare, cannot download containers image prepare recently

2020-08-31 Thread Wesley Hayutin
On Mon, Aug 31, 2020 at 1:23 PM Ruslanas Gžibovskis 
wrote:

> Hi all,
>
> I have noticed, that recently my undercloud is not able to download images
> [0]. I have provided newly generated containers-prepare-parameter.yaml and
> outputs from container image prepare
>
> providing --verbose and later beginning of --debug (in the end) [0]
>
> were there any changes? As "openstack tripleo container image prepare
> default --output-env-file containers-prepare-parameter.yaml
> --local-push-destination" have prepared a bit different file, compared
> what was previously: NEW # namespace: docker.io/tripleou VS namespace:
> docker.io/tripleomaster # OLD
>
>
> [0] - http://paste.openstack.org/show/rBCNAQJBEe9y7CKyi9aG/
> --
> Ruslanas Gžibovskis
> +370 6030 7030
>

hrm.. seems to be there:
https://hub.docker.com/r/tripleou/centos-binary-gnocchi-metricd/tags?page=1=current-tripleo

I see current-tripleo tags

Here are some example config and logs for your reference, although what you
have seems sane to me.

https://logserver.rdoproject.org/61/749161/1/openstack-check/tripleo-ci-centos-8-ovb-3ctlr_1comp-featureset001/f87b9d9/logs/undercloud/home/zuul/containers-prepare-parameter.yaml.txt.gz
https://logserver.rdoproject.org/61/749161/1/openstack-check/tripleo-ci-centos-8-ovb-3ctlr_1comp-featureset001/f87b9d9/logs/undercloud/var/log/tripleo-container-image-prepare-ansible.log.txt.gz
https://logserver.rdoproject.org/61/749161/1/openstack-check/tripleo-ci-centos-8-ovb-3ctlr_1comp-featureset001/f87b9d9/logs/undercloud/var/log/tripleo-container-image-prepare.log.txt.gz
https://647ea51a7c76ba5f17c8-d499b2e7288499505ef93c98963c2e35.ssl.cf5.rackcdn.com/748668/1/gate/tripleo-ci-centos-8-standalone/0c706e1/logs/undercloud/home/zuul/containers-prepare-parameters.yaml




> ___
> users mailing list
> users@lists.rdoproject.org
> http://lists.rdoproject.org/mailman/listinfo/users
>
> To unsubscribe: users-unsubscr...@lists.rdoproject.org
>
___
users mailing list
users@lists.rdoproject.org
http://lists.rdoproject.org/mailman/listinfo/users

To unsubscribe: users-unsubscr...@lists.rdoproject.org


Re: [RFC v5 4/6] usb: gadget: configfs: Check USB configuration before adding

2020-08-31 Thread Wesley Cheng



On 8/30/2020 7:29 PM, Peter Chen wrote:
> On 20-08-28 22:58:44, Wesley Cheng wrote:
>> Ensure that the USB gadget is able to support the configuration being
>> added based on the number of endpoints required from all interfaces.  This
>> accounts for any bandwidth or space limitations.
>>
>> Signed-off-by: Wesley Cheng 
>> ---
>>  drivers/usb/gadget/configfs.c | 22 ++
>>  1 file changed, 22 insertions(+)
>>
>> diff --git a/drivers/usb/gadget/configfs.c b/drivers/usb/gadget/configfs.c
>> index 56051bb97349..7c74c04b1d8c 100644
>> --- a/drivers/usb/gadget/configfs.c
>> +++ b/drivers/usb/gadget/configfs.c
>> @@ -1361,6 +1361,7 @@ static int configfs_composite_bind(struct usb_gadget 
>> *gadget,
>>  struct usb_function *f;
>>  struct usb_function *tmp;
>>  struct gadget_config_name *cn;
>> +unsigned long ep_map = 0;
>>  
>>  if (gadget_is_otg(gadget))
>>  c->descriptors = otg_desc;
>> @@ -1390,7 +1391,28 @@ static int configfs_composite_bind(struct usb_gadget 
>> *gadget,
>>  list_add(&f->list, &cfg->func_list);
>>  goto err_purge_funcs;
>>  }
>> +if (f->ss_descriptors) {
>> +struct usb_descriptor_header **d;
>> +
>> +d = f->ss_descriptors;
>> +for (; *d; ++d) {
>> +struct usb_endpoint_descriptor *ep;
>> +int addr;
>> +
>> +if ((*d)->bDescriptorType != 
>> USB_DT_ENDPOINT)
>> +continue;
>> +
>> +ep = (struct usb_endpoint_descriptor 
>> *)*d;
>> +addr = ((ep->bEndpointAddress & 0x80) 
>> >> 3) |
>> +(ep->bEndpointAddress & 0x0f);
> 
> ">> 3" or "<< 3?
> 

Hi Peter,

Thanks for your comments.  It should be ">> 3" as we want to utilize the
corresponding USB_DIR_IN bit in the bitmap to set the correct bit.
(USB_DIR_IN = 0x80)

>> +set_bit(addr, &ep_map);
> 
> You want to record all endpoints on ep_map? Considering there are
> four EP_IN (1-4), and four EP_OUT (1-4), what the value of ep_map
> would like?
> 

So for example, if a configuration uses EP8IN and EP9OUT, then the
ep_map will look like:

EP8-IN:
addr = ((0x88 & 0x80) >> 3) | (0x88 & 0xf) --> 0x18

EP9-OUT:
addr = ((0x9 & 0x80) >> 3) | (0x9 & 0xf) --> 0x9

ep_map = 0x01000200

The lower 16 bits will carry the OUT endpoints, whereas the upper 16
bits are the IN endpoints. (ie bit16 = ep0in, bit0 = ep0out)
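The mapping above can be sanity-checked with a short sketch that mirrors the
quoted formula (USB_DIR_IN is 0x80); only the example endpoint addresses are
assumptions.

```python
def ep_bit(bEndpointAddress: int) -> int:
    # same computation as in the patch: the IN bit (0x80) is shifted down so
    # IN endpoints land in the upper 16 bits, OR'd with the endpoint number
    return ((bEndpointAddress & 0x80) >> 3) | (bEndpointAddress & 0x0F)

ep_map = 0
for addr in (0x88, 0x09):        # EP8-IN and EP9-OUT, as in the example above
    ep_map |= 1 << ep_bit(addr)

assert ep_bit(0x88) == 0x18 and ep_bit(0x09) == 0x09
assert ep_map == 0x01000200      # matches the value worked out above
in_ep_map = ep_map >> 16         # IN endpoints only
assert bin(in_ep_map).count("1") == 1
```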

>> +}
>> +}
>>  }
>> +ret = usb_gadget_check_config(cdev->gadget, ep_map);
>> +if (ret)
>> +goto err_purge_funcs;
>> +
> 
> You may move this patch after your 4nd patch to avoid "git bisect"
> issue.
> 

Sure, thanks for the suggestion, will do that in the next rev.

Thanks
Wesley

>>  usb_ep_autoconfig_reset(cdev->gadget);
>>  }
>>  if (cdev->use_os_string) {
>> -- 
>> The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
>> a Linux Foundation Collaborative Project
>>
> 

-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project


[RFC v5 6/6] usb: dwc3: gadget: Ensure enough TXFIFO space for USB configuration

2020-08-29 Thread Wesley Cheng
If TXFIFO resizing is enabled, then depending on whether endpoint bursting is
required, a larger amount of FIFO space is beneficial.  Sometimes
a particular interface can take all the available FIFO space, leaving
other interfaces unable to function properly.  This callback ensures that
the minimum FIFO requirement, a single FIFO per endpoint, can be met;
otherwise the configuration binding will fail.  The check is based on the
maximum number of eps existing across all configurations.
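The acceptance test reduces to a fit check. The sketch below models it with a
made-up per-endpoint FIFO cost, since dwc3_gadget_calc_tx_fifo_size() depends
on hardware parameters not shown here; every number in it is hypothetical.

```python
def fits(num_in_eps: int, fifo_words_per_ep: int, ram1_depth: int) -> bool:
    # mirror the patch's shape: the computed FIFO size, plus one extra unit
    # per endpoint, must not exceed the RAM1 depth reported by the hardware
    fifo_size = num_in_eps * fifo_words_per_ep + num_in_eps
    return fifo_size <= ram1_depth

assert fits(4, 100, 8192)        # a small configuration fits easily
assert not fits(16, 600, 8192)   # 16 large FIFOs exceed the (made-up) depth
```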

Signed-off-by: Wesley Cheng 
---
 drivers/usb/dwc3/core.h   |  1 +
 drivers/usb/dwc3/gadget.c | 35 +++
 2 files changed, 36 insertions(+)

diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h
index e85c1ec70cc3..0559b0a82c4d 100644
--- a/drivers/usb/dwc3/core.h
+++ b/drivers/usb/dwc3/core.h
@@ -1249,6 +1249,7 @@ struct dwc3 {
u16 imod_interval;
int last_fifo_depth;
int num_ep_resized;
+   int max_cfg_eps;
 };
 
 #define INCRX_BURST_MODE 0
diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
index 53e5220f9893..e8f7ea560920 100644
--- a/drivers/usb/dwc3/gadget.c
+++ b/drivers/usb/dwc3/gadget.c
@@ -2411,6 +2411,7 @@ static int dwc3_gadget_stop(struct usb_gadget *g)
 
 out:
dwc->gadget_driver  = NULL;
+   dwc->max_cfg_eps = 0;
	spin_unlock_irqrestore(&dwc->lock, flags);
 
free_irq(dwc->irq_gadget, dwc->ev_buf);
@@ -2518,6 +2519,39 @@ static void dwc3_gadget_set_speed(struct usb_gadget *g,
	spin_unlock_irqrestore(&dwc->lock, flags);
 }
 
+static int dwc3_gadget_check_config(struct usb_gadget *g, unsigned long ep_map)
+{
+   struct dwc3 *dwc = gadget_to_dwc(g);
+   unsigned long in_ep_map;
+   int fifo_size = 0;
+   int ram1_depth;
+   int ep_num;
+
+   if (!dwc->needs_fifo_resize)
+   return 0;
+
+   /* Only interested in the IN endpoints */
+   in_ep_map = ep_map >> 16;
+   ep_num = hweight_long(in_ep_map);
+
+   if (ep_num <= dwc->max_cfg_eps)
+   return 0;
+
+   /* Update the max number of eps in the composition */
+   dwc->max_cfg_eps = ep_num;
+
+   fifo_size = dwc3_gadget_calc_tx_fifo_size(dwc, dwc->max_cfg_eps);
+   /* Based on the equation, increment by one for every ep */
+   fifo_size += dwc->max_cfg_eps;
+
+   /* Check if we can fit a single fifo per endpoint */
+   ram1_depth = DWC3_RAM1_DEPTH(dwc->hwparams.hwparams7);
+   if (fifo_size > ram1_depth)
+   return -ENOMEM;
+
+   return 0;
+}
+
 static const struct usb_gadget_ops dwc3_gadget_ops = {
.get_frame  = dwc3_gadget_get_frame,
.wakeup = dwc3_gadget_wakeup,
@@ -2527,6 +2561,7 @@ static const struct usb_gadget_ops dwc3_gadget_ops = {
.udc_stop   = dwc3_gadget_stop,
.udc_set_speed  = dwc3_gadget_set_speed,
.get_config_params  = dwc3_gadget_config_params,
+   .check_config   = dwc3_gadget_check_config,
 };
 
/* -------------------------------------------------------------------------- */
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project
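
The FIFO accounting in dwc3_gadget_check_config() above can be modeled
outside the driver.  The sketch below is illustrative arithmetic only: the
mdwidth of 8 bytes and the RAM1 depth are made-up example values, not
values read from hardware registers.

```python
def calc_tx_fifo_size(num_in_eps, mdwidth_bytes=8, max_packet=1024):
    """Mirror dwc3_gadget_calc_tx_fifo_size(): FIFO depth is counted in
    mdwidth-sized entries, plus one entry of overhead."""
    return num_in_eps * ((max_packet + mdwidth_bytes) // mdwidth_bytes) + 1

def config_fits(num_in_eps, ram1_depth, mdwidth_bytes=8):
    """Mirror dwc3_gadget_check_config(): a single FIFO per IN endpoint,
    plus one extra entry per endpoint, must fit within RAM1."""
    fifo_size = calc_tx_fifo_size(num_in_eps, mdwidth_bytes)
    fifo_size += num_in_eps  # increment by one for every ep, per the patch
    return fifo_size <= ram1_depth

# With mdwidth = 8 bytes, each 1024-byte packet needs 129 FIFO entries:
print(calc_tx_fifo_size(1))   # 130
print(config_fits(3, 4096))   # True: 3*129 + 1 + 3 = 391 entries fit
```

Binding fails with -ENOMEM exactly when this model returns False, i.e. when
even one FIFO per IN endpoint cannot fit in RAM1.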



[RFC v5 4/6] usb: gadget: configfs: Check USB configuration before adding

2020-08-29 Thread Wesley Cheng
Ensure that the USB gadget is able to support the configuration being
added, based on the number of endpoints required by all interfaces.  This
accounts for any bandwidth or space limitations.

Signed-off-by: Wesley Cheng 
---
 drivers/usb/gadget/configfs.c | 22 ++
 1 file changed, 22 insertions(+)

diff --git a/drivers/usb/gadget/configfs.c b/drivers/usb/gadget/configfs.c
index 56051bb97349..7c74c04b1d8c 100644
--- a/drivers/usb/gadget/configfs.c
+++ b/drivers/usb/gadget/configfs.c
@@ -1361,6 +1361,7 @@ static int configfs_composite_bind(struct usb_gadget *gadget,
struct usb_function *f;
struct usb_function *tmp;
struct gadget_config_name *cn;
+   unsigned long ep_map = 0;
 
if (gadget_is_otg(gadget))
c->descriptors = otg_desc;
@@ -1390,7 +1391,28 @@ static int configfs_composite_bind(struct usb_gadget *gadget,
			list_add(&f->list, &cfg->func_list);
goto err_purge_funcs;
}
+   if (f->ss_descriptors) {
+   struct usb_descriptor_header **d;
+
+   d = f->ss_descriptors;
+   for (; *d; ++d) {
+   struct usb_endpoint_descriptor *ep;
+   int addr;
+
+			if ((*d)->bDescriptorType != USB_DT_ENDPOINT)
+				continue;
+
+			ep = (struct usb_endpoint_descriptor *)*d;
+			addr = ((ep->bEndpointAddress & 0x80) >> 3) |
+				(ep->bEndpointAddress & 0x0f);
+			set_bit(addr, &ep_map);
+   }
+   }
}
+   ret = usb_gadget_check_config(cdev->gadget, ep_map);
+   if (ret)
+   goto err_purge_funcs;
+
usb_ep_autoconfig_reset(cdev->gadget);
}
if (cdev->use_os_string) {
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project
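
The bit layout that configfs_composite_bind() builds and
dwc3_gadget_check_config() consumes can be sketched in a few lines.  This
is a standalone model of the address math, not driver code:

```python
def ep_map_bit(bEndpointAddress):
    # The direction bit (0x80) shifts down to bit 4 and the endpoint
    # number keeps bits 0-3, so IN endpoints land in bits 16-31 of the
    # final map once the per-endpoint bit is set below.
    return ((bEndpointAddress & 0x80) >> 3) | (bEndpointAddress & 0x0f)

def build_ep_map(endpoint_addresses):
    ep_map = 0
    for addr in endpoint_addresses:
        ep_map |= 1 << ep_map_bit(addr)   # models set_bit(addr, &ep_map)
    return ep_map

def count_in_eps(ep_map):
    # "Only interested in the IN endpoints": shift off the lower 16 bits
    # and take the population count, like hweight_long(ep_map >> 16).
    return bin(ep_map >> 16).count("1")

# EP1 IN (0x81), EP1 OUT (0x01), EP2 IN (0x82):
m = build_ep_map([0x81, 0x01, 0x82])
print(count_in_eps(m))  # 2
```

The UDC only needs to re-check its FIFO budget when this IN-endpoint count
exceeds the largest count seen so far (max_cfg_eps).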



[RFC v5 5/6] usb: gadget: udc: core: Introduce check_config to verify USB configuration

2020-08-28 Thread Wesley Cheng
Some UDCs may have constraints on how many high-bandwidth endpoints they
can support in a certain configuration.  This API allows the composite
driver to pass down the total number of endpoints to the UDC so it can
verify it has the required resources to support the configuration.

Signed-off-by: Wesley Cheng 
---
 drivers/usb/gadget/udc/core.c | 9 +
 include/linux/usb/gadget.h| 2 ++
 2 files changed, 11 insertions(+)

diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c
index c33ad8a333ad..e006d69dff9b 100644
--- a/drivers/usb/gadget/udc/core.c
+++ b/drivers/usb/gadget/udc/core.c
@@ -1001,6 +1001,15 @@ int usb_gadget_ep_match_desc(struct usb_gadget *gadget,
 }
 EXPORT_SYMBOL_GPL(usb_gadget_ep_match_desc);
 
+int usb_gadget_check_config(struct usb_gadget *gadget, unsigned long ep_map)
+{
+   if (!gadget->ops->check_config)
+   return 0;
+
+   return gadget->ops->check_config(gadget, ep_map);
+}
+EXPORT_SYMBOL_GPL(usb_gadget_check_config);
+
/* ------------------------------------------------------------------------- */
 
 static void usb_gadget_state_work(struct work_struct *work)
diff --git a/include/linux/usb/gadget.h b/include/linux/usb/gadget.h
index 52ce1f6b8f83..791ae5b352a1 100644
--- a/include/linux/usb/gadget.h
+++ b/include/linux/usb/gadget.h
@@ -326,6 +326,7 @@ struct usb_gadget_ops {
struct usb_ep *(*match_ep)(struct usb_gadget *,
struct usb_endpoint_descriptor *,
struct usb_ss_ep_comp_descriptor *);
+   int (*check_config)(struct usb_gadget *gadget, unsigned long ep_map);
 };
 
 /**
@@ -575,6 +576,7 @@ int usb_gadget_connect(struct usb_gadget *gadget);
 int usb_gadget_disconnect(struct usb_gadget *gadget);
 int usb_gadget_deactivate(struct usb_gadget *gadget);
 int usb_gadget_activate(struct usb_gadget *gadget);
+int usb_gadget_check_config(struct usb_gadget *gadget, unsigned long ep_map);
 #else
 static inline int usb_gadget_frame_number(struct usb_gadget *gadget)
 { return 0; }
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project



[RFC v5 3/6] dt-bindings: usb: dwc3: Add entry for tx-fifo-resize

2020-08-28 Thread Wesley Cheng
Re-introduce the comment for the tx-fifo-resize setting for the DWC3
controller.  This allows vendors to control whether they require the TX
FIFO resizing logic on their HW, as the default FIFO size configurations
may already be sufficient.

Signed-off-by: Wesley Cheng 
Acked-by: Rob Herring 
---
 Documentation/devicetree/bindings/usb/dwc3.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Documentation/devicetree/bindings/usb/dwc3.txt b/Documentation/devicetree/bindings/usb/dwc3.txt
index d03edf9d3935..fba14d084072 100644
--- a/Documentation/devicetree/bindings/usb/dwc3.txt
+++ b/Documentation/devicetree/bindings/usb/dwc3.txt
@@ -103,7 +103,7 @@ Optional properties:
1-16 (DWC_usb31 programming guide section 1.2.3) to
enable periodic ESS TX threshold.
 
- -  tx-fifo-resize: determines if the FIFO *has* to be reallocated.
+ - tx-fifo-resize: determines if the FIFO *has* to be reallocated.
  - snps,incr-burst-type-adjustment: Value for INCR burst type of GSBUSCFG0
			register, undefined length INCR burst type enable and INCRx type.
			When just one value, which means INCRX burst mode enabled. When
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project



[RFC v5 2/6] arm64: boot: dts: qcom: sm8150: Enable dynamic TX FIFO resize logic

2020-08-28 Thread Wesley Cheng
Enable the flexible TX FIFO resize logic on SM8150.  Using a larger TX FIFO
size can help account for situations where system latency is greater than
the USB bus transmission latency.

Signed-off-by: Wesley Cheng 
---
 arch/arm64/boot/dts/qcom/sm8150.dtsi | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/boot/dts/qcom/sm8150.dtsi b/arch/arm64/boot/dts/qcom/sm8150.dtsi
index 7a0c5b419ff0..169ac4c8e298 100644
--- a/arch/arm64/boot/dts/qcom/sm8150.dtsi
+++ b/arch/arm64/boot/dts/qcom/sm8150.dtsi
@@ -717,6 +717,7 @@ usb_1_dwc3: dwc3@a60 {
interrupts = ;
snps,dis_u2_susphy_quirk;
snps,dis_enblslpm_quirk;
+   tx-fifo-resize;
	phys = <&usb_1_hsphy>, <&usb_1_ssphy>;
phy-names = "usb2-phy", "usb3-phy";
};
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project



[RFC v5 0/6] Re-introduce TX FIFO resize for larger EP bursting

2020-08-28 Thread Wesley Cheng
Changes in V5:
 - Added check_config() logic, which is used to communicate the number of EPs
   used in a particular configuration.  Based on this, the DWC3 gadget driver
   has the ability to know the maximum number of eps utilized in all configs.
   This helps reduce unnecessary allocation to unused eps, and will catch fifo
   allocation issues at bind() time.
 - Fixed variable declarations to one per line, in reverse Christmas tree order.
 - Created a helper for fifo clearing, which is used by ep0.c

Changes in V4:
 - Removed struct dwc3* as an argument for dwc3_gadget_resize_tx_fifos()
 - Removed WARN_ON(1) in case we run out of fifo space
 
Changes in V3:
 - Removed "Reviewed-by" tags
 - Renamed series back to RFC
 - Modified logic to ensure that fifo_size is reset if we pass the minimum
   threshold.  Tested with binding multiple FDs requesting 6 FIFOs.

Changes in V2:
 - Modified TXFIFO resizing logic to ensure that each EP is reserved a
   FIFO.
 - Removed dev_dbg() prints and fixed typos from patches
 - Added some more description on the dt-bindings commit message

Currently, there is no functionality to allow for resizing the TXFIFOs;
the driver relies on the HW default setting for the TXFIFO depth.  In most
cases, the HW default is probably sufficient, but for USB compositions that
contain multiple functions requiring EP bursting, the default settings
might not be enough.  Also note that the current SW will assign an EP to a
function driver without checking whether the TXFIFO size for that
particular EP is large enough. (this is a problem if there are multiple
HW-defined values for the TXFIFO size)

It is mentioned in the SNPS databook that a minimum TX FIFO depth of 3 is
required for an EP that supports bursting.  Otherwise, there may be
frequent occurrences of bursts ending early.  For high-bandwidth functions,
such as data tethering (protocols that support data aggregation), mass
storage, and media transfer protocol (over FFS), the bMaxBurst value can be
large, and a bigger TXFIFO depth may prove beneficial in terms of USB
throughput (which can be tied to system access latency, etc.).  It allows
for a more consistent burst of traffic, without interruptions, as data is
readily available in the FIFO.

With testing done using the mass storage function driver, the results show
that with a larger TXFIFO depth, the bandwidth increased significantly.

Test Parameters:
 - Platform: Qualcomm SM8150
 - bMaxBurst = 6
 - USB req size = 256kB
 - Num of USB reqs = 16
 - USB Speed = Super-Speed
 - Function Driver: Mass Storage (w/ ramdisk)
 - Test Application: CrystalDiskMark

Results:

TXFIFO Depth = 3 max packets

Test Case | Data Size | AVG tput (in MB/s)
---
Sequential|1 GB x | 
Read  |9 loops| 193.60
  |   | 195.86
  |   | 184.77
  |   | 193.60
---

TXFIFO Depth = 6 max packets

Test Case | Data Size | AVG tput (in MB/s)
---
Sequential|1 GB x | 
Read  |9 loops| 287.35
  |   | 304.94
  |   | 289.64
  |   | 293.61
-------

Wesley Cheng (6):
  usb: dwc3: Resize TX FIFOs to meet EP bursting requirements
  arm64: boot: dts: qcom: sm8150: Enable dynamic TX FIFO resize logic
  dt-bindings: usb: dwc3: Add entry for tx-fifo-resize
  usb: gadget: configfs: Check USB configuration before adding
  usb: gadget: udc: core: Introduce check_config to verify USB
configuration
  usb: dwc3: gadget: Ensure enough TXFIFO space for USB configuration

 .../devicetree/bindings/usb/dwc3.txt  |   2 +-
 arch/arm64/boot/dts/qcom/sm8150.dtsi  |   1 +
 drivers/usb/dwc3/core.c   |   2 +
 drivers/usb/dwc3/core.h   |   7 +
 drivers/usb/dwc3/ep0.c|   2 +
 drivers/usb/dwc3/gadget.c | 194 ++
 drivers/usb/gadget/configfs.c |  22 ++
 drivers/usb/gadget/udc/core.c |   9 +
 include/linux/usb/gadget.h|   2 +
 9 files changed, 240 insertions(+), 1 deletion(-)

-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project



[RFC v5 1/6] usb: dwc3: Resize TX FIFOs to meet EP bursting requirements

2020-08-28 Thread Wesley Cheng
Some devices have USB compositions which may require multiple endpoints
that support EP bursting.  HW defined TX FIFO sizes may not always be
sufficient for these compositions.  By utilizing flexible TX FIFO
allocation, this allows for endpoints to request the required FIFO depth to
achieve higher bandwidth.  With some higher bMaxBurst configurations, using
a larger TX FIFO size results in better TX throughput.

Ensure that one TX FIFO is reserved for every IN endpoint.  This allows
the FIFO allocation logic to avoid running out of FIFO space.

Signed-off-by: Wesley Cheng 
---
 drivers/usb/dwc3/core.c   |   2 +
 drivers/usb/dwc3/core.h   |   6 ++
 drivers/usb/dwc3/ep0.c|   2 +
 drivers/usb/dwc3/gadget.c | 159 ++
 4 files changed, 169 insertions(+)

diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
index 422aea24afcd..cf7815ed3861 100644
--- a/drivers/usb/dwc3/core.c
+++ b/drivers/usb/dwc3/core.c
@@ -1307,6 +1307,8 @@ static void dwc3_get_properties(struct dwc3 *dwc)
				&tx_thr_num_pkt_prd);
	device_property_read_u8(dev, "snps,tx-max-burst-prd",
				&tx_max_burst_prd);
+   dwc->needs_fifo_resize = device_property_read_bool(dev,
+  "tx-fifo-resize");
 
dwc->disable_scramble_quirk = device_property_read_bool(dev,
"snps,disable_scramble_quirk");
diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h
index 2f04b3e42bf1..e85c1ec70cc3 100644
--- a/drivers/usb/dwc3/core.h
+++ b/drivers/usb/dwc3/core.h
@@ -1214,6 +1214,7 @@ struct dwc3 {
unsignedis_utmi_l1_suspend:1;
unsignedis_fpga:1;
unsignedpending_events:1;
+   unsignedneeds_fifo_resize:1;
unsignedpullups_connected:1;
unsignedsetup_packet_pending:1;
unsignedthree_stage_setup:1;
@@ -1246,6 +1247,8 @@ struct dwc3 {
unsigneddis_metastability_quirk:1;
 
u16 imod_interval;
+   int last_fifo_depth;
+   int num_ep_resized;
 };
 
 #define INCRX_BURST_MODE 0
int dwc3_gadget_set_link_state(struct dwc3 *dwc, enum dwc3_link_state state);
 int dwc3_send_gadget_ep_cmd(struct dwc3_ep *dep, unsigned cmd,
struct dwc3_gadget_ep_cmd_params *params);
int dwc3_send_gadget_generic_command(struct dwc3 *dwc, unsigned cmd, u32 param);
+void dwc3_gadget_clear_tx_fifos(struct dwc3 *dwc);
 #else
 static inline int dwc3_gadget_init(struct dwc3 *dwc)
 { return 0; }
@@ -1478,6 +1482,8 @@ static inline int dwc3_send_gadget_ep_cmd(struct dwc3_ep *dep, unsigned cmd,
 static inline int dwc3_send_gadget_generic_command(struct dwc3 *dwc,
int cmd, u32 param)
 { return 0; }
+static inline void dwc3_gadget_clear_tx_fifos(struct dwc3 *dwc)
+{ }
 #endif
 
 #if IS_ENABLED(CONFIG_USB_DWC3_DUAL_ROLE)
diff --git a/drivers/usb/dwc3/ep0.c b/drivers/usb/dwc3/ep0.c
index 456aa87e8778..3ea1e051a8f1 100644
--- a/drivers/usb/dwc3/ep0.c
+++ b/drivers/usb/dwc3/ep0.c
@@ -611,6 +611,8 @@ static int dwc3_ep0_set_config(struct dwc3 *dwc, struct usb_ctrlrequest *ctrl)
return -EINVAL;
 
case USB_STATE_ADDRESS:
+   dwc3_gadget_clear_tx_fifos(dwc);
+
ret = dwc3_ep0_delegate_req(dwc, ctrl);
/* if the cfg matches and the cfg is non zero */
if (cfg && (!ret || (ret == USB_GADGET_DELAYED_STATUS))) {
diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
index df8d89d6bdc9..53e5220f9893 100644
--- a/drivers/usb/dwc3/gadget.c
+++ b/drivers/usb/dwc3/gadget.c
@@ -613,6 +613,161 @@ static int dwc3_gadget_set_ep_config(struct dwc3_ep *dep, unsigned int action)
 static void dwc3_stop_active_transfer(struct dwc3_ep *dep, bool force,
		bool interrupt);
 
+static int dwc3_gadget_calc_tx_fifo_size(struct dwc3 *dwc, int mult)
+{
+   int max_packet = 1024;
+   int fifo_size;
+   int mdwidth;
+
+   mdwidth = DWC3_MDWIDTH(dwc->hwparams.hwparams0);
+   /* MDWIDTH is represented in bits, we need it in bytes */
+   mdwidth >>= 3;
+
+   fifo_size = mult * ((max_packet + mdwidth) / mdwidth) + 1;
+   return fifo_size;
+}
+
+void dwc3_gadget_clear_tx_fifos(struct dwc3 *dwc)
+{
+   struct dwc3_ep *dep;
+   int fifo_depth;
+   int size;
+   int num;
+
+   if (!dwc->needs_fifo_resize)
+   return;
+
+   /* Read ep0IN related TXFIFO size */
+   dep = dwc->eps[1];
+   size = dwc3_readl(dwc->regs, DWC3_GTXFIFOSIZ(0));
+   if (DWC3_IP_IS(DWC31))
+   fifo_depth = DWC31_GTXFIFOSIZ_TXFDEP(size);
+   else
+   fifo_depth = DWC3_GTXFIFOSIZ_TXFDEP(size);

[PATCH v2] usb: dwc3: Stop active transfers before halting the controller

2020-08-28 Thread Wesley Cheng
In the DWC3 databook, for a device-initiated disconnect or bus reset, the
driver is required to send End Transfer (DWC3_DEPCMD_ENDTRANSFER) commands
for any pending transfers.  In addition, before the controller can move to
the halted state, the SW needs to acknowledge any pending events.  If the
controller is not halted properly, there is a chance the controller will
continue accessing stale or freed TRBs and buffers.

Signed-off-by: Wesley Cheng 

---
Changes in v2:
 - Moved cleanup code to the pullup() API to differentiate between device
   disconnect and hibernation.
 - Added cleanup code to the bus reset case as well.
 - Verified the move to pullup() did not reproduce the problem using the
   same test sequence.

Verified the fix by adding a check for ETIMEDOUT during the run/stop call.
A shell script wrote to the configfs UDC file to trigger disconnect and
connect, while a batch script had the PC execute data transfers over adb
(i.e., adb push).  After a few iterations, we'd run into a scenario where
the controller wasn't halted.  With this change, there were no failed
halts after many iterations.
---
 drivers/usb/dwc3/ep0.c|  2 +-
 drivers/usb/dwc3/gadget.c | 52 ++-
 2 files changed, 52 insertions(+), 2 deletions(-)

diff --git a/drivers/usb/dwc3/ep0.c b/drivers/usb/dwc3/ep0.c
index 59f2e8c31bd1..456aa87e8778 100644
--- a/drivers/usb/dwc3/ep0.c
+++ b/drivers/usb/dwc3/ep0.c
@@ -197,7 +197,7 @@ int dwc3_gadget_ep0_queue(struct usb_ep *ep, struct usb_request *request,
 	int ret;
 
 	spin_lock_irqsave(&dwc->lock, flags);
-   if (!dep->endpoint.desc) {
+   if (!dep->endpoint.desc || !dwc->pullups_connected) {
dev_err(dwc->dev, "%s: can't queue to disabled endpoint\n",
dep->name);
ret = -ESHUTDOWN;
diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
index 3ab6f118c508..df8d89d6bdc9 100644
--- a/drivers/usb/dwc3/gadget.c
+++ b/drivers/usb/dwc3/gadget.c
@@ -1516,7 +1516,7 @@ static int __dwc3_gadget_ep_queue(struct dwc3_ep *dep, struct dwc3_request *req)
 {
struct dwc3 *dwc = dep->dwc;
 
-   if (!dep->endpoint.desc) {
+   if (!dep->endpoint.desc || !dwc->pullups_connected) {
dev_err(dwc->dev, "%s: can't queue to disabled endpoint\n",
dep->name);
return -ESHUTDOWN;
@@ -1926,6 +1926,24 @@ static int dwc3_gadget_set_selfpowered(struct usb_gadget *g,
return 0;
 }
 
+static void dwc3_stop_active_transfers(struct dwc3 *dwc)
+{
+   u32 epnum;
+
+   for (epnum = 2; epnum < DWC3_ENDPOINTS_NUM; epnum++) {
+   struct dwc3_ep *dep;
+
+   dep = dwc->eps[epnum];
+   if (!dep)
+   continue;
+
+   if (!(dep->flags & DWC3_EP_ENABLED))
+   continue;
+
+   dwc3_remove_requests(dwc, dep);
+   }
+}
+
 static int dwc3_gadget_run_stop(struct dwc3 *dwc, int is_on, int suspend)
 {
u32 reg;
@@ -1994,9 +2012,39 @@ static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
}
}
 
+   /*
+* Synchronize and disable any further event handling while controller
+* is being enabled/disabled.
+*/
+   disable_irq(dwc->irq_gadget);
	spin_lock_irqsave(&dwc->lock, flags);
+
+   /* Controller is not halted until pending events are acknowledged */
+   if (!is_on) {
+   u32 reg;
+
+   __dwc3_gadget_ep_disable(dwc->eps[0]);
+   __dwc3_gadget_ep_disable(dwc->eps[1]);
+
+   /*
+* The databook explicitly mentions for a device-initiated
+* disconnect sequence, the SW needs to ensure that it ends any
+* active transfers.
+*/
+   dwc3_stop_active_transfers(dwc);
+
+   reg = dwc3_readl(dwc->regs, DWC3_GEVNTCOUNT(0));
+   reg &= DWC3_GEVNTCOUNT_MASK;
+   if (reg > 0) {
+   dwc3_writel(dwc->regs, DWC3_GEVNTCOUNT(0), reg);
+   dwc->ev_buf->lpos = (dwc->ev_buf->lpos + reg) %
+   dwc->ev_buf->length;
+   }
+   }
+
ret = dwc3_gadget_run_stop(dwc, is_on, false);
	spin_unlock_irqrestore(&dwc->lock, flags);
+   enable_irq(dwc->irq_gadget);
 
return ret;
 }
@@ -3100,6 +3148,8 @@ static void dwc3_gadget_reset_interrupt(struct dwc3 *dwc)
}
 
dwc3_reset_gadget(dwc);
+   /* Stop any active/pending transfers when receiving bus reset */
+   dwc3_stop_active_transfers(dwc);
 
reg = dwc3_readl(dwc->regs, DWC3_DCTL);
reg &= ~DWC3_DCTL_TSTCTRL_MASK;
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project
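
The GEVNTCOUNT acknowledgement added to pullup() above advances the
driver's event-buffer read pointer past events the hardware already
queued.  A minimal model of that pointer math follows; the buffer length
and byte counts are arbitrary example values, not hardware state:

```python
def ack_pending_events(lpos, pending_bytes, buf_length):
    """Model the cleanup in dwc3_gadget_pullup(): after writing the
    pending count back to GEVNTCOUNT, the local read position (lpos)
    advances and wraps around the ring buffer, so no stale events are
    left for the controller to hold the halt on."""
    return (lpos + pending_bytes) % buf_length

# A 4096-byte event buffer with 12 bytes of unhandled events near the end:
print(ack_pending_events(4092, 12, 4096))  # 8
```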



[squid-users] filter access.log

2020-08-28 Thread Wesley Mouedine Assaby

Hi,

I have the following logs :

1598547651.549 120818 192.168.100.105 TCP_TUNNEL/200 3234 CONNECT dmp.re:443 ericf HIER_DIRECT/213.186.33.2 -
1598547651.549 120726 192.168.100.105 TCP_TUNNEL/200 3234 CONNECT www.dmp.re:443 ericf HIER_DIRECT/213.186.33.2 -
1598547652.325  0 192.168.100.109 TCP_DENIED/407 3881 CONNECT g.live.com:443 - HIER_NONE/- text/html
1598547654.216 25 192.168.100.109 TCP_MISS/200 4973 GET http://192.168.100.89/nagios/cgi-bin/status.cgi? ericf HIER_DIRECT/192.168.100.89 text/html
1598547662.424  0 192.168.100.109 TCP_DENIED/407 3881 CONNECT g.live.com:443 - HIER_NONE/- text/html
1598547664.937 26 192.168.100.109 TCP_MISS/200 4978 GET http://192.168.100.89/nagios/cgi-bin/status.cgi? ericf HIER_DIRECT/192.168.100.89 text/html
1598547671.345 110538 192.168.100.116 TCP_TUNNEL/200 55246 CONNECT login.live.com:443 ericf HIER_DIRECT/40.90.22.187 -
1598547672.565  0 192.168.100.109 TCP_DENIED/407 4228 CONNECT g.live.com:443 - HIER_NONE/- text/html
1598547675.655 25 192.168.100.109 TCP_MISS/200 4974 GET http://192.168.100.89/nagios/cgi-bin/status.cgi? ericf HIER_DIRECT/192.168.100.89 text/html
1598547676.192  0 192.168.100.109 TCP_DENIED/407 3881 CONNECT g.live.com:443 - HIER_NONE/- text/html


Is it possible to remove log entries that are not authenticated (LDAP)?
I mean these lines:  *- HIER_NONE/- text/html$

Thanks!

-- Eric
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
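
Squid's access_log directive can take ACLs, so unauthenticated entries
need not be logged at all (check the access_log documentation for your
Squid version).  As a quick post-processing alternative, the
unauthenticated TCP_DENIED/407 lines can be filtered out of an existing
access.log; a sketch using the sample entries above:

```python
import re

# Matches entries with no user (the "-" field) sent to HIER_NONE, i.e.
# the unauthenticated 407 challenge lines shown in the question.
UNAUTH = re.compile(r"\s-\s+HIER_NONE/-\s+text/html\s*$")

def drop_unauthenticated(lines):
    return [line for line in lines if not UNAUTH.search(line)]

log = [
    "1598547652.325 0 192.168.100.109 TCP_DENIED/407 3881 CONNECT g.live.com:443 - HIER_NONE/- text/html",
    "1598547671.345 110538 192.168.100.116 TCP_TUNNEL/200 55246 CONNECT login.live.com:443 ericf HIER_DIRECT/40.90.22.187 -",
]
print(drop_unauthenticated(log))  # only the authenticated TCP_TUNNEL line survives
```

The same pattern works with `grep -vE '- HIER_NONE/- text/html$' access.log`
for one-off filtering on the command line.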


[ovirt-users] Host has no default route

2020-08-26 Thread Wesley Stewart
I am trying to add a host to a single host cluster.  Everything has gone
pretty well so far, except when the host gets to the end of the
installation process, I see an exclamation mark indicating "Host has no
default route".

The installation "finishes" and the host goes into a non-responsive state.
I am not really sure where to check.  I get a:
host *name* installation has failed. Network error during communication
with host.

Running 4.3.6

Thanks!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IGCI4HBOFF53DK3BQVPDCULWXSIGEK7Y/


How to read an element inside another element in json with UpdateRecord

2020-08-25 Thread Wesley C. Dias de Oliveira
Hi, Nifi Community.

I'm trying to read an element inside another element in json with
UpdateRecord in the following json:

"data": [
["Date", "Campaign name", "Cost"],
["2020-08-25", "01.M.VL.0.GSP", 75.14576],
["2020-08-25", "11.b.da.0.search", 344.47],
["2020-08-25", "12.m.dl.0.search", 98.04],
["2020-08-25", "13.m.dl.0.search", 276.98],
["2020-08-25", "14.m.dl.0.search", 23.7],
["2020-08-25", "15.m.dl.0.search", 3.87],
["2020-08-25", "16.b.da.0.search", 4.2],
["2020-08-25", "19.m.dl.0.display", 71.452542],
["2020-08-25", "55.m.vl.1.youtube", 322.875653],
["2020-08-25", "57.m.dl.0.youtube", 124.061768],
["2020-08-25", "58.m.vl.1.youtube", 0.387847],
["2020-08-25", "59.m.vl.1.youtube", 72.637692],
["2020-08-25", "62.b.vl.1.youtube", 1.397887]
]

For example, I need to get the value '59.m.vl.1.youtube' or the date value
'2020-08-25'.

Here's my processor settings:
[image: image.png]

Can someone suggest something?

Thank you.
-- 
Grato,
Wesley C. Dias de Oliveira.

Linux User nº 576838.
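
Outside of NiFi, the access pattern for that payload is plain nested
indexing: the first inner array is the header row and each following row
is positional.  A small Python sketch of the extraction (inside NiFi,
UpdateRecord would need a RecordPath into the nested arrays; check the
RecordPath guide for your NiFi version, since array-index support varies):

```python
# A trimmed copy of the payload from the question, wrapped in an object.
payload = {"data": [
    ["Date", "Campaign name", "Cost"],
    ["2020-08-25", "59.m.vl.1.youtube", 72.637692],
    ["2020-08-25", "62.b.vl.1.youtube", 1.397887],
]}

# Split off the header row, then zip it against each data row so the
# positional values become named fields.
header, *rows = payload["data"]
records = [dict(zip(header, row)) for row in rows]

print(records[0]["Campaign name"])  # 59.m.vl.1.youtube
print(records[0]["Date"])           # 2020-08-25
```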


Re: [google-appengine] "Google Cloud API" was not in in "Adding credentials to the project".

2020-08-22 Thread wesley chun
This is the Google App Engine mailing list. To use the PaaS system, you
only need to create a project (which you have already done) then go to
console.cloud.google.com/appengine to create your App Engine app. There's
no need to create credentials or use a "Google Cloud API" unless you know
you need to do so. I suggest installing the Google Cloud SDK (
cloud.google.com/sdk) which provides tools to simplify creating & deploying
your app. To learn more about App Engine, please see
cloud.google.com/appengine for its documentation. If you have any specific
questions, feel free to ask here but paste any specific commands or
screenshots showing where you may have an issue.

Good luck!
--Wesley

On Sat, Aug 22, 2020 at 11:53 AM I snb  wrote:

> In the console of Google cloud platform, I created project name -> Google
> cloud API -> Credentials. However, "Google Cloud API" was not an option in
> "Adding credentials to the project".


- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
"A computer never does what you want... only what you tell it."
wesley chun :: @wescpy <http://twitter.com/wescpy> :: Software
Architect & Engineer
Developer Advocate at Google Cloud by day; at night...
Python training & consulting : http://CyberwebConsulting.com
"Core Python" books : http://CorePython.com
Python blog: http://wescpy.blogspot.com

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to google-appengine+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/google-appengine/CAB6eaA5MBEnHCUaq_BodApwc%2BewaJRwduMUtaXLtGGMtZ%3Dg02A%40mail.gmail.com.


[rdo-dev] RDO packaging for CentOS-Stream

2020-08-21 Thread Wesley Hayutin
Greetings,

Are there any public plans for building RDO packages on CentOS-Stream
available for the community to review?

Thanks 0/
___
dev mailing list
dev@lists.rdoproject.org
http://lists.rdoproject.org/mailman/listinfo/dev

To unsubscribe: dev-unsubscr...@lists.rdoproject.org


Re: [PATCH v8 4/4] arm64: boot: dts: qcom: pm8150b: Add DTS node for PMIC VBUS booster

2020-08-20 Thread Wesley Cheng



On 8/12/2020 2:34 AM, Sergei Shtylyov wrote:
> Hello!
> 
> On 12.08.2020 10:19, Wesley Cheng wrote:
> 
>> Add the required DTS node for the USB VBUS output regulator, which is
>> available on PM8150B.  This will provide the VBUS source to connected
>> peripherals.
>>
>> Signed-off-by: Wesley Cheng 
>> ---
>>   arch/arm64/boot/dts/qcom/pm8150b.dtsi   | 6 ++
>>   arch/arm64/boot/dts/qcom/sm8150-mtp.dts | 4 
>>   2 files changed, 10 insertions(+)
>>
>> diff --git a/arch/arm64/boot/dts/qcom/pm8150b.dtsi
>> b/arch/arm64/boot/dts/qcom/pm8150b.dtsi
>> index 053c659734a7..9e560c1ca30d 100644
>> --- a/arch/arm64/boot/dts/qcom/pm8150b.dtsi
>> +++ b/arch/arm64/boot/dts/qcom/pm8150b.dtsi
>> @@ -53,6 +53,12 @@ power-on@800 {
>>   status = "disabled";
>>   };
>>   +    pm8150b_vbus: dcdc@1100 {
> 
>    s/dcdc/regulator/? What is "dcdc", anyway?
>    The device nodes must have the generic names, according to the DT spec.
> 

Hi Sergei,

Thanks for the comment!

DCDC is the label that we use for the DC-to-DC converter block, since
the VBUS booster will output 5V to the connected devices.  Would it make
more sense to have "dc-dc"?

Thanks
Wesley

>> +    compatible = "qcom,pm8150b-vbus-reg";
>> +    status = "disabled";
>> +    reg = <0x1100>;
>> +    };
>> +
>>   pm8150b_typec: typec@1500 {
>>   compatible = "qcom,pm8150b-usb-typec";
>>   status = "disabled";
> [...]
> 
> MBR, Sergei

-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project


Re: [PATCH] usb: dwc3: Stop active transfers before halting the controller

2020-08-19 Thread Wesley Cheng



On 8/19/2020 2:42 PM, Thinh Nguyen wrote:
> Hi,
> 
> Wesley Cheng wrote:
>> In the DWC3 databook, for a device initiated disconnect, the driver is
>> required to send dependxfer commands for any pending transfers.
>> In addition, before the controller can move to the halted state, the SW
>> needs to acknowledge any pending events.  If the controller is not halted
>> properly, there is a chance the controller will continue accessing stale or
>> freed TRBs and buffers.
>>
>> Signed-off-by: Wesley Cheng 
>>
>> ---
>> Verified fix by adding a check for ETIMEDOUT during the run stop call.
>> Shell script writing to the configfs UDC file to trigger disconnect and
>> connect.  Batch script to have PC execute data transfers over adb (ie adb
>> push)  After a few iterations, we'd run into a scenario where the
>> controller wasn't halted.  With the following change, no failed halts after
>> many iterations.
>> ---
>>  drivers/usb/dwc3/ep0.c|  2 +-
>>  drivers/usb/dwc3/gadget.c | 59 +--
>>  2 files changed, 57 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/usb/dwc3/ep0.c b/drivers/usb/dwc3/ep0.c
>> index 59f2e8c31bd1..456aa87e8778 100644
>> --- a/drivers/usb/dwc3/ep0.c
>> +++ b/drivers/usb/dwc3/ep0.c
>> @@ -197,7 +197,7 @@ int dwc3_gadget_ep0_queue(struct usb_ep *ep, struct usb_request *request,
>>  int ret;
>>  
>>  spin_lock_irqsave(&dwc->lock, flags);
>> -if (!dep->endpoint.desc) {
>> +if (!dep->endpoint.desc || !dwc->pullups_connected) {
>>  dev_err(dwc->dev, "%s: can't queue to disabled endpoint\n",
>>  dep->name);
>>  ret = -ESHUTDOWN;
>> diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
>> index 3ab6f118c508..1f981942d7f9 100644
>> --- a/drivers/usb/dwc3/gadget.c
>> +++ b/drivers/usb/dwc3/gadget.c
>> @@ -1516,7 +1516,7 @@ static int __dwc3_gadget_ep_queue(struct dwc3_ep *dep, struct dwc3_request *req)
>>  {
>>  struct dwc3 *dwc = dep->dwc;
>>  
>> -if (!dep->endpoint.desc) {
>> +if (!dep->endpoint.desc || !dwc->pullups_connected) {
>>  dev_err(dwc->dev, "%s: can't queue to disabled endpoint\n",
>>  dep->name);
>>  return -ESHUTDOWN;
>> @@ -1926,6 +1926,24 @@ static int dwc3_gadget_set_selfpowered(struct usb_gadget *g,
>>  return 0;
>>  }
>>  
>> +static void dwc3_stop_active_transfers(struct dwc3 *dwc)
>> +{
>> +u32 epnum;
>> +
>> +for (epnum = 2; epnum < DWC3_ENDPOINTS_NUM; epnum++) {
>> +struct dwc3_ep *dep;
>> +
>> +dep = dwc->eps[epnum];
>> +if (!dep)
>> +continue;
>> +
>> +if (!(dep->flags & DWC3_EP_ENABLED))
>> +continue;
>> +
>> +dwc3_remove_requests(dwc, dep);
>> +}
>> +}
>> +
>>  static int dwc3_gadget_run_stop(struct dwc3 *dwc, int is_on, int suspend)
>>  {
>>  u32 reg;
>> @@ -1950,16 +1968,37 @@ static int dwc3_gadget_run_stop(struct dwc3 *dwc, int is_on, int suspend)
>>  
>>  dwc->pullups_connected = true;
>>  } else {
>> +dwc->pullups_connected = false;
>> +
>> +__dwc3_gadget_ep_disable(dwc->eps[0]);
>> +__dwc3_gadget_ep_disable(dwc->eps[1]);
> 
> run_stop() function shouldn't be doing this. This is done in
> dwc3_gadget_stop(). Also, if it's device-initiated disconnect, driver
> needs to wait for control transfers to complete.
> 

Hi Thinh ,

Thanks for the feedback.

We already wait for the ep0state to move to the setup stage before
running the run/stop routine, but events can still be triggered until
the controller is halted (halting does not begin until we attempt to
write to the DCTL register).  The reasoning is the same as for the
comment below.

>> +
>> +/*
>> + * The databook explicitly mentions for a device-initiated
>> + * disconnect sequence, the SW needs to ensure that it ends any
>> + * active transfers.
>> + */
>> +dwc3_stop_active_transfers(dwc);
> 
> It shouldn't be done here. Maybe move this to the dwc3_gadget_pullup()
> function. The run_stop() function can be called for other context beside
> this (e.g. hibernation)

Re: [PATCH] usb: dwc3: Stop active transfers before halting the controller

2020-08-19 Thread Wesley Cheng



On 8/19/2020 4:37 AM, Felipe Balbi wrote:
> 
> Hi,
> 
> Wesley Cheng  writes:
>> In the DWC3 databook, for a device initiated disconnect, the driver is
>> required to send dependxfer commands for any pending transfers.
>> In addition, before the controller can move to the halted state, the SW
>> needs to acknowledge any pending events.  If the controller is not halted
>> properly, there is a chance the controller will continue accessing stale or
>> freed TRBs and buffers.
>>
>> Signed-off-by: Wesley Cheng 
>>
>> ---
>> Verified fix by adding a check for ETIMEDOUT during the run stop call.
>> Shell script writing to the configfs UDC file to trigger disconnect and
>> connect.  Batch script to have PC execute data transfers over adb (ie adb
>> push)  After a few iterations, we'd run into a scenario where the
>> controller wasn't halted.  With the following change, no failed halts after
>> many iterations.
>> ---
>>  drivers/usb/dwc3/ep0.c|  2 +-
>>  drivers/usb/dwc3/gadget.c | 59 +--
>>  2 files changed, 57 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/usb/dwc3/ep0.c b/drivers/usb/dwc3/ep0.c
>> index 59f2e8c31bd1..456aa87e8778 100644
>> --- a/drivers/usb/dwc3/ep0.c
>> +++ b/drivers/usb/dwc3/ep0.c
>> @@ -197,7 +197,7 @@ int dwc3_gadget_ep0_queue(struct usb_ep *ep, struct 
>> usb_request *request,
>>  int ret;
>>  
>>  spin_lock_irqsave(&dwc->lock, flags);
>> -if (!dep->endpoint.desc) {
>> +if (!dep->endpoint.desc || !dwc->pullups_connected) {
> 
> these two should be the same. If pullups are not connected, there's no
> way we can have an endpoint descriptor. Did you find a race condition here?
> 

Hi Felipe,

At least for EP0, I don't see us clearing the EP0 desc after we set it
during dwc3_gadget_init_endpoint().  In the dwc3_gadget_ep_disable() we
only clear the desc for non control EPs:

static int __dwc3_gadget_ep_disable(struct dwc3_ep *dep)
{
...
/* Clear out the ep descriptors for non-ep0 */
if (dep->number > 1) {
dep->endpoint.comp_desc = NULL;
dep->endpoint.desc = NULL;
}

Is the desc for ep0 handled elsewhere? (checked ep0.c as well, but
couldn't find any references there)

>> @@ -1926,6 +1926,24 @@ static int dwc3_gadget_set_selfpowered(struct 
>> usb_gadget *g,
>>  return 0;
>>  }
>>  
>> +static void dwc3_stop_active_transfers(struct dwc3 *dwc)
>> +{
>> +u32 epnum;
>> +
>> +for (epnum = 2; epnum < DWC3_ENDPOINTS_NUM; epnum++) {
>> +struct dwc3_ep *dep;
>> +
>> +dep = dwc->eps[epnum];
>> +if (!dep)
>> +continue;
>> +
>> +if (!(dep->flags & DWC3_EP_ENABLED))
>> +continue;
>> +
>> +dwc3_remove_requests(dwc, dep);
>> +}
>> +}
>> +
>>  static int dwc3_gadget_run_stop(struct dwc3 *dwc, int is_on, int suspend)
>>  {
>>  u32 reg;
>> @@ -1950,16 +1968,37 @@ static int dwc3_gadget_run_stop(struct dwc3 *dwc, 
>> int is_on, int suspend)
>>  
>>  dwc->pullups_connected = true;
>>  } else {
>> +dwc->pullups_connected = false;
>> +
>> +__dwc3_gadget_ep_disable(dwc->eps[0]);
>> +__dwc3_gadget_ep_disable(dwc->eps[1]);
>> +
>> +/*
>> + * The databook explicitly mentions for a device-initiated
>> + * disconnect sequence, the SW needs to ensure that it ends any
>> + * active transfers.
>> + */
>> +dwc3_stop_active_transfers(dwc);
> 
> IIRC, gadget driver is required to dequeue transfers before
> disconnecting. My memory is a bit fuzzy in that area, but anyway, how
> did you trigger this problem?
> 

I had a script that just did the following to trigger the soft disconnect:
echo "" > /sys/kernel/config/usb_gadget/g1/UDC
sleep 4
echo "a60.dwc3" > /sys/kernel/config/usb_gadget/g1/UDC

Then on the PC, I just had a batch file executing adb push (of a large
file), in order to create the situation where there was a device
initiated disconnect while an active transfer was occurring.  After
maybe 4-5 iterations, I saw that the controller halt failed.

[   87.364252] dwc3_gadget_run_stop run stop = 0
[   87.374168] ffs_epfile_io_complete: eshutdown
[   87.376162] __dwc3_gadget_ep_queue
[   87.386160] ffs_epfile_io_complete: eshutdown

I added some prints to hope

[ceph-users] Re: radosgw beast access logs

2020-08-19 Thread Wesley Dillingham
We would very much appreciate having this backported to nautilus.

Respectfully,

*Wes Dillingham*
w...@wesdillingham.com


On Wed, Aug 19, 2020 at 9:02 AM Casey Bodley  wrote:

> On Tue, Aug 18, 2020 at 1:33 PM Graham Allan  wrote:
> >
> > Are there any plans to add access logs to the beast frontend, in the
> > same way we can get with civetweb? Increasing the "debug rgw" setting
> > really doesn't provide the same thing.
> >
> > Graham
> > --
> > Graham Allan - g...@umn.edu
> > Associate Director of Operations - Minnesota Supercomputing Institute
> >
>
> Yes, this was implemented by Mark Kogan in
> https://github.com/ceph/ceph/pull/33083. It looks like it was
> backported to Octopus for 15.2.5 in
> https://tracker.ceph.com/issues/45951. Is there interest in a nautilus
> backport too?
>
> Casey
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[PATCH] usb: dwc3: Stop active transfers before halting the controller

2020-08-18 Thread Wesley Cheng
In the DWC3 databook, for a device-initiated disconnect, the driver is
required to send End Transfer (DEPENDXFER) commands for any pending transfers.
In addition, before the controller can move to the halted state, the SW
needs to acknowledge any pending events.  If the controller is not halted
properly, there is a chance the controller will continue accessing stale or
freed TRBs and buffers.

Signed-off-by: Wesley Cheng 

---
Verified fix by adding a check for ETIMEDOUT during the run stop call.
Shell script writing to the configfs UDC file to trigger disconnect and
connect.  Batch script to have PC execute data transfers over adb (i.e. adb
push).  After a few iterations, we'd run into a scenario where the
controller wasn't halted.  With the following change, no failed halts after
many iterations.
---
 drivers/usb/dwc3/ep0.c|  2 +-
 drivers/usb/dwc3/gadget.c | 59 +--
 2 files changed, 57 insertions(+), 4 deletions(-)

diff --git a/drivers/usb/dwc3/ep0.c b/drivers/usb/dwc3/ep0.c
index 59f2e8c31bd1..456aa87e8778 100644
--- a/drivers/usb/dwc3/ep0.c
+++ b/drivers/usb/dwc3/ep0.c
@@ -197,7 +197,7 @@ int dwc3_gadget_ep0_queue(struct usb_ep *ep, struct 
usb_request *request,
int ret;
 
	spin_lock_irqsave(&dwc->lock, flags);
-   if (!dep->endpoint.desc) {
+   if (!dep->endpoint.desc || !dwc->pullups_connected) {
dev_err(dwc->dev, "%s: can't queue to disabled endpoint\n",
dep->name);
ret = -ESHUTDOWN;
diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
index 3ab6f118c508..1f981942d7f9 100644
--- a/drivers/usb/dwc3/gadget.c
+++ b/drivers/usb/dwc3/gadget.c
@@ -1516,7 +1516,7 @@ static int __dwc3_gadget_ep_queue(struct dwc3_ep *dep, 
struct dwc3_request *req)
 {
struct dwc3 *dwc = dep->dwc;
 
-   if (!dep->endpoint.desc) {
+   if (!dep->endpoint.desc || !dwc->pullups_connected) {
dev_err(dwc->dev, "%s: can't queue to disabled endpoint\n",
dep->name);
return -ESHUTDOWN;
@@ -1926,6 +1926,24 @@ static int dwc3_gadget_set_selfpowered(struct usb_gadget 
*g,
return 0;
 }
 
+static void dwc3_stop_active_transfers(struct dwc3 *dwc)
+{
+   u32 epnum;
+
+   for (epnum = 2; epnum < DWC3_ENDPOINTS_NUM; epnum++) {
+   struct dwc3_ep *dep;
+
+   dep = dwc->eps[epnum];
+   if (!dep)
+   continue;
+
+   if (!(dep->flags & DWC3_EP_ENABLED))
+   continue;
+
+   dwc3_remove_requests(dwc, dep);
+   }
+}
+
 static int dwc3_gadget_run_stop(struct dwc3 *dwc, int is_on, int suspend)
 {
u32 reg;
@@ -1950,16 +1968,37 @@ static int dwc3_gadget_run_stop(struct dwc3 *dwc, int 
is_on, int suspend)
 
dwc->pullups_connected = true;
} else {
+   dwc->pullups_connected = false;
+
+   __dwc3_gadget_ep_disable(dwc->eps[0]);
+   __dwc3_gadget_ep_disable(dwc->eps[1]);
+
+   /*
+* The databook explicitly mentions for a device-initiated
+* disconnect sequence, the SW needs to ensure that it ends any
+* active transfers.
+*/
+   dwc3_stop_active_transfers(dwc);
+
reg &= ~DWC3_DCTL_RUN_STOP;
 
if (dwc->has_hibernation && !suspend)
reg &= ~DWC3_DCTL_KEEP_CONNECT;
-
-   dwc->pullups_connected = false;
}
 
dwc3_gadget_dctl_write_safe(dwc, reg);
 
+   /* Controller is not halted until pending events are acknowledged */
+   if (!is_on) {
+   reg = dwc3_readl(dwc->regs, DWC3_GEVNTCOUNT(0));
+   reg &= DWC3_GEVNTCOUNT_MASK;
+   if (reg > 0) {
+   dwc3_writel(dwc->regs, DWC3_GEVNTCOUNT(0), reg);
+   dwc->ev_buf->lpos = (dwc->ev_buf->lpos + reg) %
+   dwc->ev_buf->length;
+   }
+   }
+
do {
reg = dwc3_readl(dwc->regs, DWC3_DSTS);
reg &= DWC3_DSTS_DEVCTRLHLT;
@@ -1994,9 +2033,15 @@ static int dwc3_gadget_pullup(struct usb_gadget *g, int 
is_on)
}
}
 
+   /*
+* Synchronize and disable any further event handling while controller
+* is being enabled/disabled.
+*/
+   disable_irq(dwc->irq_gadget);
	spin_lock_irqsave(&dwc->lock, flags);
ret = dwc3_gadget_run_stop(dwc, is_on, false);
	spin_unlock_irqrestore(&dwc->lock, flags);
+   enable_irq(dwc->irq_gadget);
 
return ret;
 }
@@ -3535,6 +3580,14 @@ static irqreturn_t dwc3_check_event_buf(struct 
dwc3

Re: [RFC v4 1/3] usb: dwc3: Resize TX FIFOs to meet EP bursting requirements

2020-08-18 Thread Wesley Cheng



On 8/12/2020 11:34 AM, Wesley Cheng wrote:
>>
>> awesome, thanks a lot for this :-) It's a considerable increase in your
>> setup. My only fear here is that we may end up creating a situation
>> where we can't allocate enough FIFO for all endpoints. This is, of
>> course, a consequence of the fact that we enable one endpoint at a
>> time.
>>
>> Perhaps we could envision a way where function driver requests endpoints
>> in bulk, i.e. combines all endpoint requirements into a single method
>> call for gadget framework and, consequently, for UDC.
>>
> Hi Felipe,
> 
> I agree... Resizing the txfifo is not as straightforward as it sounds :).
> Would be interesting to see how this affects tput on other platforms as
> well.  We had a few discussions within our team, and came up with the
> logic implemented in this patch to reserve at least 1 txfifo per
> endpoint. Then we allocate any additional fifo space requests based on
> the remaining space left.  That way we could avoid over allocating, but
> the trade off is that we may have unused EPs taking up fifo space.
> 
> I didn't consider branching out to changing the gadget framework, so let
> me take a look at your suggestion to see how it turns out.
> 

Hi Felipe,

Instead of catching the out-of-FIFO-memory issue during the ep enable
stage, I was thinking we could do it somewhere during the bind.  That
would allow us to at least fail the bind instead of ending up with an
enumerated device which doesn't work (which is what happens if we bail
out during the ep enable phase).  The idea I had was the following:

Introduce a new USB gadget function pointer, say
usb_gadget_check_config(struct usb_gadget *gadget, unsigned long ep_map)

The purpose for the ep_map is to carry information about the endpoints
the configuration requires, since each function driver will define the
endpoint descriptor(s) it will advertise to the host.  We have access to
these ep desc after the bind() routine is executed for the function
driver, so we can update this map after every bind.  The configfs driver
will call the check config API every time a configuration is added.

static int configfs_composite_bind(struct usb_gadget *gadget,
struct usb_gadget_driver *gdriver)
{
...
  /* Go through all configs, attach all functions */
  list_for_each_entry(c, &gi->cdev.configs, list) {
  ...
list_for_each_entry_safe(f, tmp, &c->func_list, list) {
...
if (f->ss_descriptors) {
  struct usb_descriptor_header **descriptors;
  descriptors = f->ss_descriptors;
  for (; *descriptors; ++descriptors) {
struct usb_endpoint_descriptor *ep;
int addr;

if ((*descriptors)->bDescriptorType != USB_DT_ENDPOINT)
continue;

ep = (struct usb_endpoint_descriptor *)*descriptors;
addr = ((ep->bEndpointAddress & 0x80) >> 3)
|  (ep->bEndpointAddress & 0x0f);
set_bit(addr, &ep_map);
  }
}
usb_gadget_check_config(cdev->gadget, ep_map);

What it'll allow us to do is to decode the ep_map in the dwc3/udc driver
to determine if we have enough fifo space. Also, if we wanted to utilize
this ep map for the actual resizing stage, we could eliminate the issue
of not knowing how many EPs will be enabled, and allocating potentially
unused fifos due to unused eps.


Thanks
Wesley

-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project


[Mpi-forum] Add No-No Vote for #263

2020-08-17 Thread Wesley Bland via mpi-forum
Hi all,

I need to add a very short no-no reading/vote for this week on #263 - 
Clarification: Section 11.7 - p. 482 - MPI Exception Clarification

https://github.com/mpi-forum/mpi-issues/issues/263 


Jeff S. found a very minor mistake where I removed the word “exception" when I 
meant to remove the word “error”. You can see the change here:

https://github.com/mpi-forum/mpi-standard/pull/177/commits/892e2d2d6f19772c60cad665f1b7670e62f1fa06
 


I’ll point that out as a no-no during the errata reading this week and we’ll 
have a no-no and errata vote on this item.

Thanks,
Wes
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


Re: [google-appengine] Receiving data from Google Cloud

2020-08-17 Thread wesley chun
GCF on GCP auto-scales (as does GAE & GCR). Each instance only supports one
request at a time. Additional instances will be spun up to support other
concurrent requests. If your function (or app or container) "goes viral,"
GCP automatically handles the traffic for you. However, not everyone has
unlimited budget, so there's a way you can control the max # of instances
spun-up. To learn more about concurrent requests, check out this page in
our documentation: https://cloud.google.com/functions/docs/concepts/exec
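For reference, the instance cap mentioned above is just a deploy-time flag — a minimal sketch, where the function name, runtime, and region are placeholders for your own values:

```shell
# Deploy an HTTP-triggered function and cap scale-out at 10 instances.
# "my-func", the runtime, and the region are placeholder values.
gcloud functions deploy my-func \
    --runtime python38 \
    --trigger-http \
    --region us-central1 \
    --max-instances 10
```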

On Sun, Aug 16, 2020 at 12:57 AM Vishnu U  wrote:

> Thanks a lot. Now I got an idea on how to proceed. Also please explain to
> me in case there are multiple requests how GCP will handle them - is there
> any load balancer? This is because the project is time-critical.
>
> On Sunday, August 16, 2020 at 1:56:44 AM UTC+5:30, Wesley C (Google) wrote:
>>
>> Ah, great. That was good to know, and yes, you can do it the easiest with
>> Cloud Functions... since you're only serving a model, you don't need an
>> entire app (App Engine). I've not done it myself, but start with this
>> post
>> <https://blog.tensorflow.org/2018/08/training-and-serving-ml-models-with-tf-keras.html>
>> on how to serve models w/tf.keras. Then follow-up with this post
>> <https://cloud.google.com/blog/products/ai-machine-learning/how-to-serve-deep-learning-models-using-tensorflow-2-0-with-cloud-functions>
>> about a year later on how it can be done with Cloud Functions.
>>
>> To quickly learn Cloud Functions, I'll refer to the advice I gave in my
>> 1st response: "You can get started immediately by going to the Cloud
>> console's Cloud Functions dashboard
>> <https://console.cloud.google.com/functions> where you can create, code,
>> deploy, and test your function, all from your browser. The Quickstart
>> sample <https://cloud.google.com/functions/docs/quickstart-python> code
>> is already dropped into the editor and ready for you to modify."
>>
>> Good luck!
>> --Wesley
>>
>>
>> On Fri, Aug 14, 2020 at 11:37 PM Vishnu U  wrote:
>>
>>> My aim is like i want to perform prediction using a Keras saved model.
>>> So I receive two sensor inputs from raspberry pi to Google Cloud and
>>> perform the prediction on the cloud and return that result back to
>>> raspberry pi to make necessary changes. Will cloud functions be suitable
>>> for this?
>>>
>>> On Saturday, August 15, 2020 at 11:59:13 AM UTC+5:30, Wesley C (Google)
>>> wrote:
>>>>
>>>> It's best if you describe your entire architecture then, because I was
>>>> only going on what you stated in your OP... you "have to send data from
>>>> Raspberry Pi to Google Cloud, process it, and receive the result back to
>>>> the same Raspberry Pi." I answered that your use case looks like it fits
>>>> Cloud Functions better than either App Engine or Cloud Run. There was no
>>>> mention of GCS (Cloud Storage) nor Cloud Datastore. As mentioned, you *can*
>>>> use App Engine to get the RaspPI data, process it, and return it, but it
>>>> seems like it would be easier to use Cloud Functions instead to do the same
>>>> thing.
>>>>
>>>> Cloud Functions can take the data, do some processing and return it,
>>>> just like App Engine, but easier. However, if the code that processes this
>>>> data is more complex and is an entire *app* where you have many components
>>>> or need to persist, provide a web UI, etc., then yes, App Engine would then
>>>> be better. Cloud Functions is for serverless function-hosting in the cloud,
>>>> App Engine is for serverless app-hosting in the cloud, and Cloud Run is for
>>>> serverless container-hosting in the cloud.
>>>>
>>>>
>>>> On Fri, Aug 14, 2020 at 6:22 PM Vishnu U  wrote:
>>>>
>>>>>
>>>>> But my source of data is not any cloud storage or cloud data store.
>>>>> Suppose I received data on app engine from raspberry pi and I perform some
>>>>> calculation and want to return that data. Is this possible?
>>>>> On Sat, 15 Aug 2020 at 6:45 AM, wesley chun  wrote:
>>>>>
>>>>>> While App Engine will serve your needs, based on what you asked,
>>>>>> perhaps Google Cloud Functions <http://cloud.google.com/functions>
>>>>>> would be a simpler solution. You can get started immediately by going to
>>>>>> the Cloud console's Cloud Functions dashboard
>>>>>&

Re: [google-appengine] Receiving data from Google Cloud

2020-08-15 Thread wesley chun
Ah, great. That was good to know, and yes, you can do it the easiest with
Cloud Functions... since you're only serving a model, you don't need an
entire app (App Engine). I've not done it myself, but start with this post
<https://blog.tensorflow.org/2018/08/training-and-serving-ml-models-with-tf-keras.html>
on how to serve models w/tf.keras. Then follow-up with this post
<https://cloud.google.com/blog/products/ai-machine-learning/how-to-serve-deep-learning-models-using-tensorflow-2-0-with-cloud-functions>
about a year later on how it can be done with Cloud Functions.

To quickly learn Cloud Functions, I'll refer to the advice I gave in my 1st
response: "You can get started immediately by going to the Cloud console's
Cloud Functions dashboard <https://console.cloud.google.com/functions> where
you can create, code, deploy, and test your function, all from your
browser. The Quickstart sample
<https://cloud.google.com/functions/docs/quickstart-python> code is already
dropped into the editor and ready for you to modify."

Good luck!
--Wesley


On Fri, Aug 14, 2020 at 11:37 PM Vishnu U  wrote:

> My aim is like i want to perform prediction using a Keras saved model. So
> I receive two sensor inputs from raspberry pi to Google Cloud and perform
> the prediction on the cloud and return that result back to raspberry pi to
> make necessary changes. Will cloud functions be suitable for this?
>
> On Saturday, August 15, 2020 at 11:59:13 AM UTC+5:30, Wesley C (Google)
> wrote:
>>
>> It's best if you describe your entire architecture then, because I was
>> only going on what you stated in your OP... you "have to send data from
>> Raspberry Pi to Google Cloud, process it, and receive the result back to
>> the same Raspberry Pi." I answered that your use case looks like it fits
>> Cloud Functions better than either App Engine or Cloud Run. There was no
>> mention of GCS (Cloud Storage) nor Cloud Datastore. As mentioned, you *can*
>> use App Engine to get the RaspPI data, process it, and return it, but it
>> seems like it would be easier to use Cloud Functions instead to do the same
>> thing.
>>
>> Cloud Functions can take the data, do some processing and return it, just
>> like App Engine, but easier. However, if the code that processes this data
>> is more complex and is an entire *app* where you have many components or
>> need to persist, provide a web UI, etc., then yes, App Engine would then be
>> better. Cloud Functions is for serverless function-hosting in the cloud, App
>> Engine is for serverless app-hosting in the cloud, and Cloud Run is for
>> serverless container-hosting in the cloud.
>>
>>
>> On Fri, Aug 14, 2020 at 6:22 PM Vishnu U  wrote:
>>
>>>
>>> But my source of data is not any cloud storage or cloud data store.
>>> Suppose I received data on app engine from raspberry pi and I perform some
>>> calculation and want to return that data. Is this possible?
>>> On Sat, 15 Aug 2020 at 6:45 AM, wesley chun  wrote:
>>>
>>>> While App Engine will serve your needs, based on what you asked,
>>>> perhaps Google Cloud Functions <http://cloud.google.com/functions>
>>>> would be a simpler solution. You can get started immediately by going to
>>>> the Cloud console's Cloud Functions dashboard
>>>> <https://console.cloud.google.com/functions> where you can create,
>>>> code, deploy, and test your function, all from your browser. The Quickstart
>>>> sample <https://cloud.google.com/functions/docs/quickstart-python>
>>>> code is already dropped into the editor and ready for you to modify.
>>>> Alternatively, if you already have that app you want to call in a Docker
>>>> container, then you should use Google Cloud Run
>>>> <http://cloud.google.com/run> instead. App Engine is best suited for
>>>> normal web apps like Flask or Django.
>>>>
>>>> Hope this helps!
>>>> --Wesley
>>>>
>>>> On Fri, Aug 14, 2020 at 7:28 AM Vishnu U  wrote:
>>>>
>>>>> I am working on a project where I have to send data from Raspberry Pi
>>>>> to Google Cloud, process it, and receive the result back to the same
>>>>> Raspberry Pi. Is this possible using Google App Engine?. If not is there
>>>>> any other service in GCP that can do this functionality?
>>>>>
>>>> --
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
"A computer never does what you want... only what you tell it."
wesley chun :: @wescpy <http://twitter.com/wescpy> :: Softw

Re: [google-appengine] Receiving data from Google Cloud

2020-08-15 Thread wesley chun
It's best if you describe your entire architecture then, because I was only
going on what you stated in your OP... you "have to send data from
Raspberry Pi to Google Cloud, process it, and receive the result back to
the same Raspberry Pi." I answered that your use case looks like it fits
Cloud Functions better than either App Engine or Cloud Run. There was no
mention of GCS (Cloud Storage) nor Cloud Datastore. As mentioned, you *can*
use App Engine to get the RaspPI data, process it, and return it, but it
seems like it would be easier to use Cloud Functions instead to do the same
thing.

Cloud Functions can take the data, do some processing and return it, just
like App Engine, but easier. However, if the code that processes this data
is more complex and is an entire *app* where you have many components or
need to persist, provide a web UI, etc., then yes, App Engine would then be
better. Cloud Functions is for serverless function-hosting in the cloud, App
Engine is for serverless app-hosting in the cloud, and Cloud Run is for
serverless container-hosting in the cloud.


On Fri, Aug 14, 2020 at 6:22 PM Vishnu U  wrote:

>
> But my source of data is not any cloud storage or cloud data store.
> Suppose I received data on app engine from raspberry pi and I perform some
> calculation and want to return that data. Is this possible?
> On Sat, 15 Aug 2020 at 6:45 AM, wesley chun  wrote:
>
>> While App Engine will serve your needs, based on what you asked, perhaps 
>> Google
>> Cloud Functions <http://cloud.google.com/functions> would be a simpler
>> solution. You can get started immediately by going to the Cloud
>> console's Cloud Functions dashboard
>> <https://console.cloud.google.com/functions> where you can create, code,
>> deploy, and test your function, all from your browser. The Quickstart
>> sample <https://cloud.google.com/functions/docs/quickstart-python> code
>> is already dropped into the editor and ready for you to modify.
>> Alternatively, if you already have that app you want to call in a Docker
>> container, then you should use Google Cloud Run
>> <http://cloud.google.com/run> instead. App Engine is best suited for
>> normal web apps like Flask or Django.
>>
>> Hope this helps!
>> --Wesley
>>
>> On Fri, Aug 14, 2020 at 7:28 AM Vishnu U  wrote:
>>
>>> I am working on a project where I have to send data from Raspberry Pi to
>>> Google Cloud, process it, and receive the result back to the same Raspberry
>>> Pi. Is this possible using Google App Engine?. If not is there any other
>>> service in GCP that can do this functionality?
>>>
>>
>>>

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to google-appengine+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/google-appengine/CAB6eaA4aDtU_Za1ZHkwO_tHnEUzDbKbKnp0ZUcxeOnvsG697NQ%40mail.gmail.com.


Re: [google-appengine] Receiving data from Google Cloud

2020-08-14 Thread wesley chun
While App Engine will serve your needs, based on what you asked, perhaps Google
Cloud Functions <http://cloud.google.com/functions> would be a simpler
solution. You can get started immediately by going to the Cloud console's
Cloud Functions dashboard <https://console.cloud.google.com/functions>
where you can create, code, deploy, and test your function, all from your
browser. The Quickstart sample
<https://cloud.google.com/functions/docs/quickstart-python> code is already
dropped into the editor and ready for you to modify. Alternatively, if you
already have that app you want to call in a Docker container, then you
should use Google Cloud Run <http://cloud.google.com/run> instead. App
Engine is best suited for normal web apps like Flask or Django.

Hope this helps!
--Wesley

On Fri, Aug 14, 2020 at 7:28 AM Vishnu U  wrote:

> I am working on a project where I have to send data from Raspberry Pi to
> Google Cloud, process it, and receive the result back to the same Raspberry
> Pi. Is this possible using Google App Engine?. If not is there any other
> service in GCP that can do this functionality?
>

-- 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
"A computer never does what you want... only what you tell it."
wesley chun :: @wescpy <http://twitter.com/wescpy> :: Software
Architect & Engineer
Developer Advocate at Google Cloud by day; at night...
Python training & consulting : http://CyberwebConsulting.com
"Core Python" books : http://CorePython.com
Python blog: http://wescpy.blogspot.com

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to google-appengine+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/google-appengine/CAB6eaA7sxv%3DgjxHXYPU3W3CcEKyorsF6X7SjMA6ny1aS3nwvog%40mail.gmail.com.


Re: [Mpi-forum] August 2020 Meeting Registration Page

2020-08-14 Thread Wesley Bland via mpi-forum
Hi all,

I just want to remind everyone one last time to register for next week’s 
virtual meeting. We have enough orgs registered to meet quorum, but there are 
still a few orgs who usually attend, but have not yet registered. Remember that 
the first voting block is on the first day so if you don’t register, you won’t 
be able to vote.

https://forms.gle/WbmoWm1iMdWS2QEX7 <https://forms.gle/WbmoWm1iMdWS2QEX7>

If you’re not sure if you’ve registered, the list of everyone who I have is 
updated on the website. I just pushed an update so it’s possible that your CDN 
might not show the most up to date version yet. There should be 36 names on the 
list.

https://www.mpi-forum.org/meetings/2020/08/attendance 
<https://www.mpi-forum.org/meetings/2020/08/attendance>

For voting during the meeting, remember that we will vote entirely online via a 
JotForms page. I will email out a link to everyone who has registered (and is 
eligible to vote) when the voting block opens. The script that tallies the 
votes will only accept the first vote from each org so make sure that you talk 
to the other people you work with to determine who will submit the vote. Voting 
closes 30 minutes after each block opens (though voting as soon as the block 
opens is appreciated to allow me to tally the votes faster). If you have any 
questions, feel free to email me or send me a Slack message.

Thanks,
Wes

> On Aug 4, 2020, at 3:54 PM, Wesley Bland  wrote:
> 
> Hi all,
> 
> Registration for the August 2020 Meeting is now open. Details are on the 
> logistics page on the website, but the link you need to register is here:
> 
> https://forms.gle/WbmoWm1iMdWS2QEX7 <https://forms.gle/WbmoWm1iMdWS2QEX7>
> 
> As always, you must be registered in order to attend and vote. In particular, 
> voting eligibility is cut off after the first voting block, which is 
> scheduled for 11:30am Central US time on Monday, August 17. That essentially 
> means that there will be no late registration allowed.
> 
> If you have any questions or concerns, please let me know.
> 
> Thanks,
> Wes

___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


Re: [RFC v4 1/3] usb: dwc3: Resize TX FIFOs to meet EP bursting requirements

2020-08-12 Thread Wesley Cheng



On 8/11/2020 12:12 AM, Felipe Balbi wrote:
> 
> Hi,
> 
> Wesley Cheng  writes:
>> On 8/10/2020 5:27 AM, Felipe Balbi wrote:
>>> Wesley Cheng  writes:
>>>
>>> Hi,
>>>
>>>> Some devices have USB compositions which may require multiple endpoints
>>>> that support EP bursting.  HW defined TX FIFO sizes may not always be
>>>> sufficient for these compositions.  By utilizing flexible TX FIFO
>>>> allocation, this allows for endpoints to request the required FIFO depth to
>>>> achieve higher bandwidth.  With some higher bMaxBurst configurations, using
>>>> a larger TX FIFO size results in better TX throughput.
>>>
>>> how much better? What's the impact? Got some real numbers of this
>>> running with upstream kernel? I guess mass storage gadget is the
>>> simplest one to test.
>>>
>> Hi Felipe,
>>
>> Thanks for the input.
>>
>> Sorry for not including the numbers in the patch itself, but I did
>> mention the set of mass storage tests I ran w/ the upstream kernel on
>> SM8150 in the cover letter.  Let me just share that here:
>>
>> Test Parameters:
>>  - Platform: Qualcomm SM8150
>>  - bMaxBurst = 6
>>  - USB req size = 256kB
>>  - Num of USB reqs = 16
>>  - USB Speed = Super-Speed
>>  - Function Driver: Mass Storage (w/ ramdisk)
>>  - Test Application: CrystalDiskMark
>>
>> Results:
>>
>> TXFIFO Depth = 3 max packets
>>
>> Test Case       | Data Size      | AVG tput (in MB/s)
>> -----------------------------------------------------
>> Sequential Read | 1 GB x 9 loops | 193.60
>>                 |                | 195.86
>>                 |                | 184.77
>>                 |                | 193.60
>> -----------------------------------------------------
>>
>> TXFIFO Depth = 6 max packets
>>
>> Test Case       | Data Size      | AVG tput (in MB/s)
>> -----------------------------------------------------
>> Sequential Read | 1 GB x 9 loops | 287.35
>>                 |                | 304.94
>>                 |                | 289.64
>>                 |                | 293.61
>> -----------------------------------------------------
> 
> awesome, thanks a lot for this :-) It's a considerable increase in your
> setup. My only fear here is that we may end up creating a situation
> where we can't allocate enough FIFO for all endpoints. This is, of
> course, a consequence of the fact that we enable one endpoint at a
> time.
> 
> Perhaps we could envision a way where function driver requests endpoints
> in bulk, i.e. combines all endpoint requirements into a single method
> call for gadget framework and, consequently, for UDC.
> 
Hi Felipe,

I agree... Resizing the txfifo is not as straightforward as it sounds :).
Would be interesting to see how this affects tput on other platforms as
well.  We had a few discussions within our team, and came up with the
logic implemented in this patch to reserve at least 1 txfifo per
endpoint. Then we allocate any additional fifo space requests based on
the remaining space left.  That way we could avoid over allocating, but
the trade off is that we may have unused EPs taking up fifo space.

I didn't consider branching out to changing the gadget framework, so let
me take a look at your suggestion to see how it turns out.

>>>> +  if (!dwc->needs_fifo_resize)
>>>> +  return 0;
>>>> +
>>>> +  /* resize IN endpoints except ep0 */
>>>> +  if (!usb_endpoint_dir_in(dep->endpoint.desc) || dep->number <= 1)
>>>> +  return 0;
>>>> +
>>>> +  /* Don't resize already resized IN endpoint */
>>>> +  if (dep->fifo_depth)
>>>
>>> using fifo_depth as a flag seems flakey to me. What happens when someone
>>> in the future changes the behavior below and this doesn't apply anymore?
>>>
>>> Also, why is this procedure called more than once for the same endpoint?
>>> Does that really happen?
>>>
>> I guess it can be considered a bug elsewhere (ie usb gadget or function
>> driver) if this happens twice.  Plus, if we decide to keep this in the
>> dwc3 enable endpoint path, the DWC3_EP_ENABLED flag will ensure it's
>> called only once as well.  It's probably overkill to check fifo_depth here.
> 
> We could add a dev_WARN_ONCE() just to catch any possible bugs elsewhere.
> 

OK, I can add that.

>>>> +  if (remaining < fifo_size) {
>>>> +  if (remaining > 0)
>>>> +  fifo_size = remaining;
>>>> +  else
>>>> +  fifo_s

[ceph-users] Meaning of the "tag" key in bucket metadata

2020-08-12 Thread Wesley Dillingham
Recently we encountered an instance of bucket corruption of two varieties.
One in which the bucket metadata was missing and another in which the
bucket.instance metadata was missing for various buckets.

We have seemingly been successful in restoring the metadata by
reconstructing it from the remaining pieces of metadata and injecting it
using "radosgw-admin metadata put" for both bucket and bucket.instance
metadata.

One piece of information that we could not determine was "tag":

"ver": {
"tag": "_iIMyX8XLf0HSTciEsrgLA7j",
"ver": 1
},

and so we tried reusing the tag in bucket.instance for the bucket metadata
and also spoofed the tag value to something random. It appears in both
situations the bucket's functionality was restored. I am however uncertain
of the function of this "tag" key and what situation I may be exposing
myself to by reusing or spoofing its value.

Respectfully,

*Wes Dillingham*
w...@wesdillingham.com
LinkedIn 
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[PATCH v8 4/4] arm64: boot: dts: qcom: pm8150b: Add DTS node for PMIC VBUS booster

2020-08-12 Thread Wesley Cheng
Add the required DTS node for the USB VBUS output regulator, which is
available on PM8150B.  This will provide the VBUS source to connected
peripherals.

Signed-off-by: Wesley Cheng 
---
 arch/arm64/boot/dts/qcom/pm8150b.dtsi   | 6 ++
 arch/arm64/boot/dts/qcom/sm8150-mtp.dts | 4 
 2 files changed, 10 insertions(+)

diff --git a/arch/arm64/boot/dts/qcom/pm8150b.dtsi 
b/arch/arm64/boot/dts/qcom/pm8150b.dtsi
index 053c659734a7..9e560c1ca30d 100644
--- a/arch/arm64/boot/dts/qcom/pm8150b.dtsi
+++ b/arch/arm64/boot/dts/qcom/pm8150b.dtsi
@@ -53,6 +53,12 @@ power-on@800 {
status = "disabled";
};
 
+   pm8150b_vbus: dcdc@1100 {
+   compatible = "qcom,pm8150b-vbus-reg";
+   status = "disabled";
+   reg = <0x1100>;
+   };
+
pm8150b_typec: typec@1500 {
compatible = "qcom,pm8150b-usb-typec";
status = "disabled";
diff --git a/arch/arm64/boot/dts/qcom/sm8150-mtp.dts 
b/arch/arm64/boot/dts/qcom/sm8150-mtp.dts
index 6c6325c3af59..ba3b5b802954 100644
--- a/arch/arm64/boot/dts/qcom/sm8150-mtp.dts
+++ b/arch/arm64/boot/dts/qcom/sm8150-mtp.dts
@@ -409,6 +409,10 @@ _mem_phy {
vdda-pll-max-microamp = <19000>;
 };
 
+&pm8150b_vbus {
+   status = "okay";
+};
+
 _1_hsphy {
status = "okay";
vdda-pll-supply = <_usb_hs_core>;
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project



[PATCH v8 3/4] arm64: boot: dts: qcom: pm8150b: Add node for USB type C block

2020-08-12 Thread Wesley Cheng
The PM8150B has a dedicated USB type C block, which can be used for type C
orientation and role detection.  Create the reference node to this type C
block for further use.

Signed-off-by: Wesley Cheng 
---
 arch/arm64/boot/dts/qcom/pm8150b.dtsi | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/arch/arm64/boot/dts/qcom/pm8150b.dtsi 
b/arch/arm64/boot/dts/qcom/pm8150b.dtsi
index e112e8876db6..053c659734a7 100644
--- a/arch/arm64/boot/dts/qcom/pm8150b.dtsi
+++ b/arch/arm64/boot/dts/qcom/pm8150b.dtsi
@@ -53,6 +53,13 @@ power-on@800 {
status = "disabled";
};
 
+   pm8150b_typec: typec@1500 {
+   compatible = "qcom,pm8150b-usb-typec";
+   status = "disabled";
+   reg = <0x1500>;
+   interrupts = <0x2 0x15 0x5 IRQ_TYPE_EDGE_RISING>;
+   };
+
pm8150b_temp: temp-alarm@2400 {
compatible = "qcom,spmi-temp-alarm";
reg = <0x2400>;



[PATCH v8 0/4] Introduce PMIC based USB type C detection

2020-08-12 Thread Wesley Cheng
Changes in v8:
 - Simplified some property definitions, and corrected the
   connector reference in the dt binding.

Changes in v7:
 - Fixups in qcom-pmic-typec.c to remove unnecessary includes, fix printk
   formatting, and revise some logic operations.

Changes in v6:
 - Removed qcom_usb_vbus-regulator.c and qcom,usb-vbus-regulator.yaml from the
   series as they have been merged on regulator.git
 - Added separate references to the usb-connector.yaml in qcom,pmic-typec.yaml
   instead of referencing the entire schema.

Changes in v5:
 - Fix dt_binding_check warning/error in qcom,pmic-typec.yaml

Changes in v4:
 - Modified qcom,pmic-typec binding to include the SS mux and the DRD remote
   endpoint nodes underneath port@1, which is assigned to the SSUSB path
   according to usb-connector
 - Added usb-connector reference to the typec dt-binding
 - Added tags to the usb type c and vbus nodes
 - Removed "qcom" tags from type c and vbus nodes
 - Modified Kconfig module name, and removed module alias from the typec driver
 
Changes in v3:
 - Fix driver reference to match driver name in Kconfig for
   qcom_usb_vbus-regulator.c
 - Utilize regulator bitmap helpers for enable, disable and is enabled calls in
   qcom_usb_vbus-regulator.c
 - Use of_get_regulator_init_data() to initialize regulator init data, and to
   set constraints in qcom_usb_vbus-regulator.c
 - Remove the need for a local device structure in the vbus regulator driver
 
Changes in v2:
 - Use devm_kzalloc() in qcom_pmic_typec_probe()
 - Add checks to make sure return value of typec_find_port_power_role() is
   valid
 - Added a VBUS output regulator driver, which will be used by the PMIC USB
   type c driver to enable/disable the source
 - Added logic to control vbus source from the PMIC type c driver when
   UFP/DFP is detected
 - Added dt-binding for this new regulator driver
 - Fixed Kconfig typec notation to match others
 - Leave type C block disabled until enabled by a platform DTS

Add the required drivers for implementing type C orientation and role
detection using the Qualcomm PMIC.  Currently, PMICs such as the PM8150B
have an integrated type C block, which can be utilized for this.  This
series adds the dt-binding, PMIC type C driver, and DTS nodes.

The PMIC type C driver will register itself as a type C port w/ a
registered type C switch for orientation, and will fetch a USB role switch
handle for the role notifications.  It will also have the ability to enable
the VBUS output to any connected devices based on if the device is behaving
as a UFP or DFP.

Wesley Cheng (4):
  usb: typec: Add QCOM PMIC typec detection driver
  dt-bindings: usb: Add Qualcomm PMIC type C controller dt-binding
  arm64: boot: dts: qcom: pm8150b: Add node for USB type C block
  arm64: boot: dts: qcom: pm8150b: Add DTS node for PMIC VBUS booster

 .../bindings/usb/qcom,pmic-typec.yaml | 112 
 arch/arm64/boot/dts/qcom/pm8150b.dtsi |  13 +
 arch/arm64/boot/dts/qcom/sm8150-mtp.dts   |   4 +
 drivers/usb/typec/Kconfig |  12 +
 drivers/usb/typec/Makefile|   1 +
 drivers/usb/typec/qcom-pmic-typec.c   | 271 ++
 6 files changed, 413 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/usb/qcom,pmic-typec.yaml
 create mode 100644 drivers/usb/typec/qcom-pmic-typec.c




[PATCH v8 1/4] usb: typec: Add QCOM PMIC typec detection driver

2020-08-12 Thread Wesley Cheng
The QCOM SPMI typec driver handles the role and orientation detection, and
notifies client drivers using the USB role switch framework.   It registers
as a typec port, so orientation can be communicated using the typec switch
APIs.  The driver also attains a handle to the VBUS output regulator, so it
can enable/disable the VBUS source when acting as a host/device.

Signed-off-by: Wesley Cheng 
Acked-by: Heikki Krogerus 
Reviewed-by: Stephen Boyd 
---
 drivers/usb/typec/Kconfig   |  12 ++
 drivers/usb/typec/Makefile  |   1 +
 drivers/usb/typec/qcom-pmic-typec.c | 271 
 3 files changed, 284 insertions(+)
 create mode 100644 drivers/usb/typec/qcom-pmic-typec.c

diff --git a/drivers/usb/typec/Kconfig b/drivers/usb/typec/Kconfig
index 559dd06117e7..63789cf88fce 100644
--- a/drivers/usb/typec/Kconfig
+++ b/drivers/usb/typec/Kconfig
@@ -73,6 +73,18 @@ config TYPEC_TPS6598X
  If you choose to build this driver as a dynamically linked module, the
  module will be called tps6598x.ko.
 
+config TYPEC_QCOM_PMIC
+   tristate "Qualcomm PMIC USB Type-C driver"
+   depends on ARCH_QCOM || COMPILE_TEST
+   help
+ Driver for supporting role switch over the Qualcomm PMIC.  This will
+ handle the USB Type-C role and orientation detection reported by the
+ QCOM PMIC if the PMIC has the capability to handle USB Type-C
+ detection.
+
+ It will also enable the VBUS output to connected devices when a
+ DFP connection is made.
+
 source "drivers/usb/typec/mux/Kconfig"
 
 source "drivers/usb/typec/altmodes/Kconfig"
diff --git a/drivers/usb/typec/Makefile b/drivers/usb/typec/Makefile
index 7753a5c3cd46..cceffd987643 100644
--- a/drivers/usb/typec/Makefile
+++ b/drivers/usb/typec/Makefile
@@ -6,4 +6,5 @@ obj-$(CONFIG_TYPEC_TCPM)+= tcpm/
 obj-$(CONFIG_TYPEC_UCSI)   += ucsi/
 obj-$(CONFIG_TYPEC_HD3SS3220)  += hd3ss3220.o
 obj-$(CONFIG_TYPEC_TPS6598X)   += tps6598x.o
+obj-$(CONFIG_TYPEC_QCOM_PMIC)  += qcom-pmic-typec.o
 obj-$(CONFIG_TYPEC)+= mux/
diff --git a/drivers/usb/typec/qcom-pmic-typec.c 
b/drivers/usb/typec/qcom-pmic-typec.c
new file mode 100644
index ..20b2b6502cb3
--- /dev/null
+++ b/drivers/usb/typec/qcom-pmic-typec.c
@@ -0,0 +1,271 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2020, The Linux Foundation. All rights reserved.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#define TYPEC_MISC_STATUS  0xb
+#define CC_ATTACHEDBIT(0)
+#define CC_ORIENTATION BIT(1)
+#define SNK_SRC_MODE   BIT(6)
+#define TYPEC_MODE_CFG 0x44
+#define TYPEC_DISABLE_CMD  BIT(0)
+#define EN_SNK_ONLYBIT(1)
+#define EN_SRC_ONLYBIT(2)
+#define TYPEC_VCONN_CONTROL0x46
+#define VCONN_EN_SRC   BIT(0)
+#define VCONN_EN_VAL   BIT(1)
+#define TYPEC_EXIT_STATE_CFG   0x50
+#define SEL_SRC_UPPER_REF  BIT(2)
+#define TYPEC_INTR_EN_CFG_10x5e
+#define TYPEC_INTR_EN_CFG_1_MASK   GENMASK(7, 0)
+
+struct qcom_pmic_typec {
+   struct device   *dev;
+   struct fwnode_handle*fwnode;
+   struct regmap   *regmap;
+   u32 base;
+
+   struct typec_capability *cap;
+   struct typec_port   *port;
+   struct usb_role_switch *role_sw;
+
+   struct regulator*vbus_reg;
+   boolvbus_enabled;
+};
+
+static void qcom_pmic_typec_enable_vbus_regulator(struct qcom_pmic_typec
+   *qcom_usb, bool enable)
+{
+   int ret;
+
+   if (enable == qcom_usb->vbus_enabled)
+   return;
+
+   if (!qcom_usb->vbus_reg) {
+   qcom_usb->vbus_reg = devm_regulator_get(qcom_usb->dev,
+   "usb_vbus");
+   if (IS_ERR(qcom_usb->vbus_reg)) {
+   qcom_usb->vbus_reg = NULL;
+   return;
+   }
+   }
+
+   if (enable) {
+   ret = regulator_enable(qcom_usb->vbus_reg);
+   if (ret)
+   return;
+   } else {
+   ret = regulator_disable(qcom_usb->vbus_reg);
+   if (ret)
+   return;
+   }
+   qcom_usb->vbus_enabled = enable;
+}
+
+static void qcom_pmic_typec_check_connection(struct qcom_pmic_typec *qcom_usb)
+{
+   enum typec_orientation orientation;
+   enum usb_role role;
+   unsigned int stat;
+   bool enable_vbus;
+
+   regmap_read(qcom_usb->regmap, qcom_usb->base + TYPEC_MISC_STATUS,
+   );
+
+   if (s

[PATCH v8 2/4] dt-bindings: usb: Add Qualcomm PMIC type C controller dt-binding

2020-08-12 Thread Wesley Cheng
Introduce the dt-binding for enabling USB type C orientation and role
detection using the PM8150B.  The driver will be responsible for receiving
the interrupt at a state change on the CC lines, reading the
orientation/role, and communicating this information to the remote
clients, which can include a role switch node and a type C switch.

Signed-off-by: Wesley Cheng 
---
 .../bindings/usb/qcom,pmic-typec.yaml | 112 ++
 1 file changed, 112 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/usb/qcom,pmic-typec.yaml

diff --git a/Documentation/devicetree/bindings/usb/qcom,pmic-typec.yaml 
b/Documentation/devicetree/bindings/usb/qcom,pmic-typec.yaml
new file mode 100644
index ..d5173f88d429
--- /dev/null
+++ b/Documentation/devicetree/bindings/usb/qcom,pmic-typec.yaml
@@ -0,0 +1,112 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: "http://devicetree.org/schemas/usb/qcom,pmic-typec.yaml#"
+$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+
+title: Qualcomm PMIC based USB type C Detection Driver
+
+maintainers:
+  - Wesley Cheng 
+
+description: |
+  Qualcomm PMIC Type C Detect
+
+properties:
+  compatible:
+enum:
+  - qcom,pm8150b-usb-typec
+
+  reg:
+maxItems: 1
+description: Type C base address
+
+  interrupts:
+maxItems: 1
+description: CC change interrupt from PMIC
+
+  connector:
+description: Connector type for remote endpoints
+type: object
+
+properties:
+  compatible:
+$ref: /connector/usb-connector.yaml#/properties/compatible
+enum:
+  - usb-c-connector
+
+  power-role: true
+  data-role: true
+
+  ports:
+description: Remote endpoint connections
+type: object
+$ref: /connector/usb-connector.yaml#/properties/ports
+
+properties:
+  port@0:
+description: Remote endpoints for the High Speed path
+type: object
+
+  port@1:
+description: Remote endpoints for the Super Speed path
+type: object
+
+properties:
+  endpoint@0:
+description: Connection to USB type C mux node
+type: object
+
+  endpoint@1:
+description: Connection to role switch node
+type: object
+
+required:
+  - compatible
+
+required:
+  - compatible
+  - interrupts
+  - connector
+
+additionalProperties: false
+
+examples:
+  - |
+#include 
+pm8150b {
+#address-cells = <1>;
+#size-cells = <0>;
+pm8150b_typec: typec@1500 {
+compatible = "qcom,pm8150b-usb-typec";
+reg = <0x1500>;
+interrupts = <0x2 0x15 0x5 IRQ_TYPE_EDGE_RISING>;
+
+connector {
+compatible = "usb-c-connector";
+power-role = "dual";
+data-role = "dual";
+ports {
+#address-cells = <1>;
+#size-cells = <0>;
+port@0 {
+reg = <0>;
+};
+port@1 {
+reg = <1>;
+#address-cells = <1>;
+#size-cells = <0>;
+usb3_data_ss: endpoint@0 {
+reg = <0>;
+remote-endpoint = <_ss_mux>;
+};
+usb3_role: endpoint@1 {
+reg = <1>;
+remote-endpoint = <_drd_switch>;
+};
+};
+};
+};
+};
+};
+...



Re: [RFC v4 1/3] usb: dwc3: Resize TX FIFOs to meet EP bursting requirements

2020-08-12 Thread Wesley Cheng



On 8/11/2020 7:22 PM, Peter Chen wrote:
> On Wed, Jun 24, 2020 at 10:31 AM Wesley Cheng  wrote:
>>
>> Some devices have USB compositions which may require multiple endpoints
>> that support EP bursting.  HW defined TX FIFO sizes may not always be
>> sufficient for these compositions.  By utilizing flexible TX FIFO
>> allocation, this allows for endpoints to request the required FIFO depth to
>> achieve higher bandwidth.  With some higher bMaxBurst configurations, using
>> a larger TX FIFO size results in better TX throughput.
>>
>> Ensure that one TX FIFO is reserved for every IN endpoint.  This allows for
>> the FIFO logic to prevent running out of FIFO space.
>>
> 
> You may do this for only allocated endpoints, but you need override
> default .match_ep
> API. See cdns3/gadget.c and cdns3/ep0.c as an example.
> 
> Peter
> 

Hi Peter,

Thank you for your input.  I've actually considered doing some
matching/resizing in the .match_ep route as well, but it doesn't work
well for situations where multiple configurations are in play. The
reason being that if you look at the epautoconf APIs, the configfs
driver will use the usb_ep_autoconfig_reset() to reset the endpoints
claimed between initialization of each configuration.  This means that
the epautoconf driver expects to re-use the usb_endpoints:

static int configfs_composite_bind(struct usb_gadget *gadget,
struct usb_gadget_driver *gdriver)
{
...

/* Go through all configs, attach all functions */
list_for_each_entry(c, &gi->cdev.configs, list) {
...
list_for_each_entry_safe(f, tmp, &cfg->func_list, list) {
list_del(&f->list);
ret = usb_add_function(c, f);
if (ret) {
list_add(&f->list, &cfg->func_list);
goto err_purge_funcs;
}
}
usb_ep_autoconfig_reset(cdev->gadget);
}

So in this situation, I wouldn't want the dwc3 gadget driver to assign a
different dwc3 ep for endpoints in each configuration, when we know that
only one set of EPs will be active when the host chooses.  I hope I
understood your feedback correctly, and definitely appreciate the input!

Thanks
Wesley



Re: [RFC v4 1/3] usb: dwc3: Resize TX FIFOs to meet EP bursting requirements

2020-08-10 Thread Wesley Cheng



On 8/10/2020 5:27 AM, Felipe Balbi wrote:
> Wesley Cheng  writes:
> 
> Hi,
> 
>> Some devices have USB compositions which may require multiple endpoints
>> that support EP bursting.  HW defined TX FIFO sizes may not always be
>> sufficient for these compositions.  By utilizing flexible TX FIFO
>> allocation, this allows for endpoints to request the required FIFO depth to
>> achieve higher bandwidth.  With some higher bMaxBurst configurations, using
>> a larger TX FIFO size results in better TX throughput.
> 
> how much better? What's the impact? Got some real numbers of this
> running with upstream kernel? I guess mass storage gadget is the
> simplest one to test.
> 
Hi Felipe,

Thanks for the input.

Sorry for not including the numbers in the patch itself, but I did
mention the set of mass storage tests I ran w/ the upstream kernel on
SM8150 in the cover letter.  Let me just share that here:

Test Parameters:
 - Platform: Qualcomm SM8150
 - bMaxBurst = 6
 - USB req size = 256kB
 - Num of USB reqs = 16
 - USB Speed = Super-Speed
 - Function Driver: Mass Storage (w/ ramdisk)
 - Test Application: CrystalDiskMark

Results:

TXFIFO Depth = 3 max packets

Test Case  | Data Size | AVG tput (in MB/s)
-------------------------------------------
Sequential | 1 GB x    |
Read       | 9 loops   | 193.60
           |           | 195.86
           |           | 184.77
           |           | 193.60
-------------------------------------------

TXFIFO Depth = 6 max packets

Test Case  | Data Size | AVG tput (in MB/s)
-------------------------------------------
Sequential | 1 GB x    |
Read       | 9 loops   | 287.35
           |           | 304.94
           |           | 289.64
           |           | 293.61
-------------------------------------------
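Averaged over the four runs in each table, the resize is worth roughly 53% on sequential reads; a quick check of that arithmetic:

```python
depth3 = [193.60, 195.86, 184.77, 193.60]  # runs at TXFIFO depth = 3
depth6 = [287.35, 304.94, 289.64, 293.61]  # runs at TXFIFO depth = 6

avg3 = sum(depth3) / len(depth3)   # ~192.0 MB/s
avg6 = sum(depth6) / len(depth6)   # ~293.9 MB/s
gain = (avg6 / avg3 - 1) * 100     # relative improvement in percent

print(f"{avg3:.1f} -> {avg6:.1f} MB/s ({gain:.0f}% gain)")
# -> 192.0 -> 293.9 MB/s (53% gain)
```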

>> diff --git a/drivers/usb/dwc3/ep0.c b/drivers/usb/dwc3/ep0.c
>> index 6dee4dabc0a4..76db9b530861 100644
>> --- a/drivers/usb/dwc3/ep0.c
>> +++ b/drivers/usb/dwc3/ep0.c
>> @@ -601,8 +601,9 @@ static int dwc3_ep0_set_config(struct dwc3 *dwc, struct 
>> usb_ctrlrequest *ctrl)
>>  {
>>  enum usb_device_state state = dwc->gadget.state;
>>  u32 cfg;
>> -int ret;
>> +int ret, num, size;
> 
> no, no. Please one declaration per line.
> 
Got it.
>>  u32 reg;
>> +struct dwc3_ep *dep;
> 
> Keep reverse xmas tree order.
> 
Understood.
>> @@ -611,6 +612,40 @@ static int dwc3_ep0_set_config(struct dwc3 *dwc, struct 
>> usb_ctrlrequest *ctrl)
>>  return -EINVAL;
>>  
>>  case USB_STATE_ADDRESS:
>> +/*
>> + * If tx-fifo-resize flag is not set for the controller, then
>> + * do not clear existing allocated TXFIFO since we do not
>> + * allocate it again in dwc3_gadget_resize_tx_fifos
>> + */
>> +if (dwc->needs_fifo_resize) {
>> +/* Read ep0IN related TXFIFO size */
>> +dep = dwc->eps[1];
>> +size = dwc3_readl(dwc->regs, DWC3_GTXFIFOSIZ(0));
>> +if (dwc3_is_usb31(dwc))
>> +dep->fifo_depth = DWC31_GTXFIFOSIZ_TXFDEP(size);
>> +else
>> +dep->fifo_depth = DWC3_GTXFIFOSIZ_TXFDEP(size);
>> +
>> +dwc->last_fifo_depth = dep->fifo_depth;
>> +/* Clear existing TXFIFO for all IN eps except ep0 */
>> +for (num = 3; num < min_t(int, dwc->num_eps,
>> +DWC3_ENDPOINTS_NUM); num += 2) {
>> +dep = dwc->eps[num];
>> +/* Don't change TXFRAMNUM on usb31 version */
>> +size = dwc3_is_usb31(dwc) ?
>> +dwc3_readl(dwc->regs,
>> +   DWC3_GTXFIFOSIZ(num >> 1)) &
>> +   DWC31_GTXFIFOSIZ_TXFRAMNUM :
>> +   0;
>> +
>> +dwc3_writel(dwc->regs,
>> +DWC3_GTXFIFOSIZ(num >> 1),
>> +size);
>> +dep->fifo_depth = 0;
>> +}
>> +dwc->num_ep_resized = 0;
> 
> care to move this into a helper that you call unconditionally and the
> helper returns early is !needs_fifo_resize?
> 
Sure, I'll include that in the next revision.
>> diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
>> index 00746c284

Re: Hopefully a simple question

2020-08-10 Thread Wesley Shields
Well, assuming you put the rules in c:\Temp\yarfile.yar, no. If you didn't put 
that file there or can't explain why it's there, then it is a positive match 
you need to investigate.

-- WXS

> On Aug 10, 2020, at 9:12 PM, Michael Fry  wrote:
> 
> So does that mean it is a positive for something being detected?
> 
> On Tuesday, 11 August 2020 10:41:48 UTC+10, Wesley Shields wrote:
> The format is <rule identifier> <file path>.
> 
> In your case, YARA matched two rules on the file c:\Temp\yarfile.yar
> 
> -- WXS
> 
>> On Aug 10, 2020, at 8:33 PM, Michael Fry > wrote:
>> 
>> Hi All,
>> 
>> So I have recently been asked to use Yara to scan some servers for some IOCs 
>> and I am using the command line version.
>> 
>> The yar file was provided to me.
>> 
>> I am struggling to find anything anywhere that outlines interpreting the 
>> log file. For example, if I have the below, is this indicating a type of 
>> scan using a particular yar file? Or is it indicating that it has found 
>> something?
>> 
>> webshell_embedded_jscript_evaluator c:\\Temp\yarfile.yar
>> webshell_jscript_eval c:\\Temp\yarfile.yar
>> 
>> Thanks
>> Michael
>> 
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "YARA" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to yara-p...@googlegroups.com <>.
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/yara-project/fca76a39-121e-476d-a597-9f4d3ea18cado%40googlegroups.com
>>  
>> <https://groups.google.com/d/msgid/yara-project/fca76a39-121e-476d-a597-9f4d3ea18cado%40googlegroups.com?utm_medium=email_source=footer>.
> 
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "YARA" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to yara-project+unsubscr...@googlegroups.com 
> <mailto:yara-project+unsubscr...@googlegroups.com>.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/yara-project/348a4407-a2b3-4d18-853d-2f7da33827dco%40googlegroups.com
>  
> <https://groups.google.com/d/msgid/yara-project/348a4407-a2b3-4d18-853d-2f7da33827dco%40googlegroups.com?utm_medium=email_source=footer>.

-- 
You received this message because you are subscribed to the Google Groups 
"YARA" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to yara-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/yara-project/D0021161-59A1-4BDD-A7A6-F60105164DAD%40atarininja.org.


Re: Hopefully a simple question

2020-08-10 Thread Wesley Shields
The format is <rule identifier> <file path>.

In your case, YARA matched two rules on the file c:\Temp\yarfile.yar
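Each line therefore splits into exactly two fields. A minimal parse of the two lines from this thread (the snippet is just an illustration, not part of YARA):

```python
log_lines = [
    r"webshell_embedded_jscript_evaluator c:\Temp\yarfile.yar",
    r"webshell_jscript_eval c:\Temp\yarfile.yar",
]

# First token: the matching rule identifier; remainder: the scanned file path
matches = [line.split(None, 1) for line in log_lines]
for rule, path in matches:
    print(f"rule {rule} matched on {path}")
```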

-- WXS

> On Aug 10, 2020, at 8:33 PM, Michael Fry  wrote:
> 
> Hi All,
> 
> So I have recently been asked to use Yara to scan some servers for some IOCs 
> and I am using the command line version.
> 
> The yar file was provided to me.
> 
> I am struggling to find anything anywhere that outlines interpreting the log 
> file. For example, if I have the below, is this indicating a type of scan 
> using a particular yar file? Or is it indicating that it has found something?
> 
> webshell_embedded_jscript_evaluator c:\\Temp\yarfile.yar
> webshell_jscript_eval c:\\Temp\yarfile.yar
> 
> Thanks
> Michael
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "YARA" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to yara-project+unsubscr...@googlegroups.com 
> .
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/yara-project/fca76a39-121e-476d-a597-9f4d3ea18cado%40googlegroups.com
>  
> .



Re: unsubscribe

2020-08-10 Thread Wesley Peng
Please send an empty message to: user-unsubscr...@ignite.apache.org to 
unsubscribe yourself from the list.



Sijo Mathew wrote:




Re: [PATCH 3/3] usb: dwc3: dwc3-qcom: Find USB connector and register role switch

2020-08-10 Thread Wesley Cheng



On 8/10/2020 5:13 AM, Felipe Balbi wrote:
> 
> Hi,
> 
> Wesley Cheng  writes:
>> @@ -190,6 +195,73 @@ static int dwc3_qcom_register_extcon(struct dwc3_qcom 
>> *qcom)
>>  return 0;
>>  }
>>  
>> +static int dwc3_qcom_usb_role_switch_set(struct usb_role_switch *sw,
>> + enum usb_role role)
>> +{
>> +struct dwc3_qcom *qcom = usb_role_switch_get_drvdata(sw);
>> +struct fwnode_handle *child;
>> +bool enable = false;
>> +
>> +if (!qcom->dwc3_drd_sw) {
>> +child = device_get_next_child_node(qcom->dev, NULL);
>> +if (child) {
>> +qcom->dwc3_drd_sw = 
>> usb_role_switch_find_by_fwnode(child);
>> +fwnode_handle_put(child);
>> +if (IS_ERR(qcom->dwc3_drd_sw)) {
>> +qcom->dwc3_drd_sw = NULL;
>> +return 0;
>> +}
>> +}
>> +}
>> +
>> +usb_role_switch_set_role(qcom->dwc3_drd_sw, role);
> 
> why is this done at the glue layer instead of core.c?
> 
Hi Felipe,

Thanks for the feedback.  So the DWC3 DRD driver already registers a
role switch device for receiving external events.  However, the DWC3
glue (dwc3-qcom) needs to also know of the role changes, so that it can
set the override bits accordingly in the controller.  I've seen a few
implementations, i.e. using a notifier block to notify the glue of these
events, but that placed a dependency on the DWC3 core being available to
the DWC3 glue at probe time.  If the DWC3 core was not available at that
time, the dwc3-qcom driver will finish its probing routine, and since
the notifier was never registered, the role change events would not be
received.

By registering another role switch device in the DWC3 glue, this gives
us a place to attempt initializing a channel w/ the DWC3 core if it
wasn't ready during probe().  For example...

usb_conn_detect_cable(role=USB_ROLE_DEVICE)
-->usb_role_switch_set_role(sw=dwc3-qcom)
  -->dwc3_qcom_usb_role_switch_set()
-- IF DWC3 core role switch available
-->usb_role_switch_set_role(sw=drd)
-- ELSE
--> do nothing.

So basically, the goal is to just propagate the role change event down
to the DWC3 core, while breaking the dependency of it being available at
probe.
>> +if (role == USB_ROLE_DEVICE)
>> +enable = true;
>> +else
>> +enable = false;
>> +
>> +qcom->mode = (role == USB_ROLE_HOST) ? USB_DR_MODE_HOST :
>> +   USB_DR_MODE_PERIPHERAL;
>> +dwc3_qcom_vbus_overrride_enable(qcom, enable);
> 
> could you add a patch fixing this typo?
> 
Sure, I'll submit a separate patch to remove that extra 'r'

Thanks
Wesley



Re: run ignitevisorcmd in k8s and docker?

2020-08-07 Thread Wesley Peng

Hi

bbweb wrote:
we are running into a problem when we run ignitevisorcmd in a K8S and 
docker environment. After we start a cluster in K8S and run ignitevisorcmd 
on a node, it can't find any nodes when running "top"; it just shows an 
empty topology.


Do you have any error log?
Are you sure ignite cluster is started up correctly?

regards.


Re: Call for presentations for ApacheCon North America 2020 now open

2020-08-05 Thread Wesley Peng
Congrats. We could prepare a talk on "machine learning applications 
with Ignite", as we store feature-engineering data in Ignite for 
large-scale, fast access.


regards.


Saikat Maitra wrote:

Congrats!!!

It looks like both of our talks are on same day, Tuesday, September 29th

https://apachecon.com/acah2020/tracks/ignite.html


[issue41491] plistlib can't load macOS BigSur system LaunchAgent

2020-08-05 Thread Wesley Whetstone


Change by Wesley Whetstone :


--
components: +macOS
nosy: +ned.deily, ronaldoussoren

___
Python tracker 
<https://bugs.python.org/issue41491>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41491] plistlib can't load macOS BigSur system LaunchAgent

2020-08-05 Thread Wesley Whetstone


New submission from Wesley Whetstone :

When attempting to load the new LaunchAgent at 
`/System/Library/LaunchAgents/com.apple.cvmsCompAgent3600_arm64.plist` plistlib 
raises a ValueError:

  File "/opt/salt/lib/python3.7/plistlib.py", line 272, in handle_end_element
handler()
  File "/opt/salt/lib/python3.7/plistlib.py", line 332, in end_integer
self.add_object(int(self.get_data()))
ValueError: invalid literal for int() with base 10: '0x010c'

on


<integer>0x010c</integer>


Technically this violates the spec at 
http://www.apple.com/DTDs/PropertyList-1.0.dtd. Figured it was worth reporting.
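The failure can be reproduced without the plist file at all: per the traceback above, plistlib's XML parser passes the <integer> element's text to int() with the default base 10, which rejects hex literals. A minimal sketch (the value mirrors the one in the LaunchAgent):

```python
# plistlib (here Python 3.7) converts <integer> text with int(text),
# i.e. base 10 only, so a hex value like Apple's raises ValueError
try:
    int("0x010c")          # what plistlib effectively does
    raised = False
except ValueError:
    raised = True

assert raised
print(int("0x010c", 16))   # parsing it as hex instead gives 268
```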

Full Plist is attached.

--
files: com.apple.cvmsCompAgent_arm64.plist
messages: 374908
nosy: jckwhet
priority: normal
severity: normal
status: open
title: plistlib can't load macOS BigSur system LaunchAgent
Added file: 
https://bugs.python.org/file49371/com.apple.cvmsCompAgent_arm64.plist




Re: Question about deployment of math computing [EXT]

2020-08-05 Thread Wesley Peng

James,

James Smith wrote:

The services which use apache/mod_perl work reliably and return data for these 
- the dancer/starman sometimes fail/hang as there are no backends to serve the 
requests or those backends timeout requests to the nginx/proxy (but still 
continue using resources). The team running the backends fail to notice this - 
because there is no easy to see reporting etc on these boxes.


Thanks for letting me know this.
We have been using starman for restful api service, they are light 
weight http request/response.
But for (machine learning)/(deep learning) serving stuff, we may 
consider to use modperl for more stability.


regards.


Re: Question about deployment of math computing

2020-08-04 Thread Wesley Peng
Thank you, David. That makes things clear. I made the mistake of thinking 
Starman was event-driven, when it is really preforking.


I think any preforking server could serve our deployment better.

Regards.


dc...@prosentient.com.au wrote:

Hi Wesley,

I don't know all the ins and outs of Starman. I do know that Starman is a 
preforking web server, which uses Net::Server::PreFork under the hood. You 
configure the number of preforked workers to correspond with your CPU and 
memory limits for that server.

As per the Starman documentation 
(https://metacpan.org/pod/release/MIYAGAWA/Starman-0.4015/lib/Starman.pm), you 
should put a frontend server/reverse proxy (like Nginx) in front of Starman. 
Nginx is often recommended because it's event-driven. The idea being that a few 
Nginx workers (rather than those thousands of Apache processes you mentioned) 
can handle a very large volume of HTTP requests, and then Nginx intelligently 
passes those requests to the backend server (e.g. Starman).

Of course, no matter what, you can still get timeouts if the backend server 
isn't responding fast enough, but typically the backend process is going as 
fast as it can. At that point, your only option is to scale up. You can do that 
by using Nginx as a load balancer and horizontally scaling your Starman 
instances, or you can put more CPUs on that machine, and configure Starman to 
prefork more workers.
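
The horizontal-scaling setup described above can be sketched in an Nginx config. This is an editorial illustration, not from the original thread; the upstream addresses and ports are placeholders for however many Starman instances you actually run:

```nginx
# Load-balance across several Starman instances (addresses are hypothetical).
upstream starman_backends {
    server 127.0.0.1:5000;
    server 127.0.0.1:5001;
    server 127.0.0.1:5002;
}

server {
    listen 80;

    location / {
        # Nginx's event-driven workers absorb the connection volume and
        # hand requests to whichever preforked Starman worker is free.
        proxy_pass http://starman_backends;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Adding capacity is then a matter of starting another Starman instance and adding one `server` line to the `upstream` block.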

Let's say you use Mod_Perl/Apache instead of Starman/Nginx. At the end of the 
day, you still need to think about how many concurrent requests you're needing 
to serve and how many CPUs you have available. If you've configured Apache to 
have too many processes, you're going to overwhelm your server with tasks. You 
need to use reasonable constraints.

But remember that this isn't specific to Perl/Starman/Nginx/Apache/mod_perl. 
These are concepts that translate to any stack regardless of programming 
language and web server. (Of course, languages like Node.js and Golang have 
some very cool features for dealing with blocking I/O, so that you can make the 
most of the resources you have. That being said, Perl has Mojo/Mojolicious, 
which claims to do non-blocking I/O in Perl. I have yet to try it though. I'm 
skeptical, but could give it a try.)

At the end of the day, it depends on the workload that you're trying to cater 
to.

David Cook

-Original Message-
From: Wesley Peng 
Sent: Wednesday, 5 August 2020 1:31 PM
To: dc...@prosentient.com.au; modperl@perl.apache.org
Subject: Re: Question about deployment of math computing

Hi

dc...@prosentient.com.au wrote:

That's interesting. After re-reading your earlier email, I think that I 
misunderstood what you were saying.

Since this is a mod_perl listserv, I imagine that the advice will always be to 
use mod_perl rather than starman?

Personally, I'd say either option would be fine. In my experience, the key 
advantage of mod_perl or starman (say over CGI) is that you can pre-load 
libraries into memory at web server startup time, and that processes are 
persistent (although they do have limited lifetimes of course).

You could use a framework like Catalyst or Mojolicious (note Dancer is another 
framework, but I haven't worked with it) which can support different web 
servers, and then try the different options to see what suits you best.

One thing to note would be that usually people put a reverse proxy in front of 
starman like Apache or Nginx (partially for serving static assets but other 
reasons as well). Your stack could be less complicated if you just went the 
mod_perl/Apache route.

That said, what OS are you planning to use? It's worth checking if mod_perl is 
easily available in your target OS's package repositories. I think Red Hat 
dropped mod_perl starting with RHEL 8, although EPEL 8 now has mod_perl in it. 
Something to think about.


We use Ubuntu 16.04 and 18.04.

We do use Dancer/Starman in our production environment, but that service only 
handles lightweight API requests, for example a RESTful API for data validation.

Our math computing, by contrast, is a heavyweight service; each request takes a 
long time to finish, so should it be deployed under Dancer?

Since the web server behind Dancer is Starman by default, and Starman is 
event-driven, it uses very few processes, and the processes can't scale up/down 
automatically.

We deploy Starman with 5 processes by default. When 5 requests come in, all 5 
Starman processes are busy computing them, so the next request will be blocked, 
won't it?

But Apache mod_perl works in a prefork fashion; it can generally have as many as 
thousands of processes if resources permit, and its process management can 
scale the children up/down automatically.

So my real question is: for a CPU-consuming service, an event-driven server 
like Starman has no advantage over a preforked server like Apache.

Am I right?

Thanks.




Re: Question about deployment of math computing

2020-08-04 Thread Wesley Peng

Hi

dc...@prosentient.com.au wrote:

If your app isn't human-facing, then I don't see why a little delay would be a 
problem?


Our app is not human-facing. Applications from other departments 
request results from our app via HTTP.


The company has a huge big-data stack deployed, such as 
Hadoop/Flink/Storm/Spark etc.; all these solutions already exist 
there. The data traffic each day is as large as xx PB.


But those stacks have a complicated privilege-control layer, and most of 
the time they run as backend services, for example offline analysis, 
feature engineering, and some real-time streaming.


We train the models in the backend, using the stacks mentioned above.

But once a model finishes training, it is pushed online as a 
prediction service and served as an HTTP API, because third-party apps 
will only request the interface via HTTP.


Thanks.


Re: Question about deployment of math computing

2020-08-04 Thread Wesley Peng

Hi

Mithun Bhattacharya wrote:

Do you really need a webserver which is providing a blocking service ?


Yes, this is a prediction server, which will be deployed in the PROD 
environment; client applications will request prediction results as scores 
from it. You can think of it as an online recommendation system.


regards.


Question about deployment of math computing

2020-08-04 Thread Wesley Peng

Hi

We do math programming (so-called machine learning today) in the web server.
The response is slow; generally it takes 100ms~500ms to finish 
a request.
For this use case, shall we deploy the code within preforked mod_perl, or an 
event-driven server like Dancer/Starman?
(We don't use a DB like MySQL or other slow-I/O storage servers; all 
arguments are passed to the web server by HTTP POST from the client.)
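
[Editorial note: a back-of-envelope capacity check may help frame this question. For purely CPU-bound work like the 100–500 ms requests described above, throughput is bounded by workers divided by service time regardless of whether the server is preforked or event-driven, since each in-flight request occupies a worker (or a core) for its full duration. The numbers below are illustrative, not from the thread:]

```python
# Rough capacity estimate for a CPU-bound worker pool.
# Event-driven I/O does not help CPU-bound work: every in-flight
# request keeps a worker busy for its whole service time.

def max_throughput(workers: int, service_time_s: float) -> float:
    """Requests/second a pool of `workers` can sustain if each
    request occupies one worker for `service_time_s` seconds."""
    return workers / service_time_s

# 5 workers at 500 ms per request -> 10 req/s.
print(max_throughput(5, 0.5))   # 10.0
# 8 workers at 250 ms per request -> 32 req/s.
print(max_throughput(8, 0.25))  # 32.0
```

Once arrival rate exceeds this bound, requests queue (or block) no matter which server model is used; the only remedies are more workers/CPUs or a faster per-request computation.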


Thank you.


Re: suggestions for perl as web development language [EXT]

2020-08-04 Thread Wesley Peng




jbiskofski wrote:

Excellent, stable, FAST, production-ready HTTP server: Starman


Yes, Starman is good. We use it for a REST API service; the app code is 
Dancer, whose backend server is Starman.


If we need more concurrent handlers, a simple front proxy like Nginx 
is deployed.


regards.


Re: suggestions for perl as web development language [EXT]

2020-08-04 Thread Wesley Peng

Hi

Mark Blackman wrote:

mod_perl’s relative efficiency can be achieved by other well-known means.


for example?

thank you.


Re: suggestions for perl as web development language [EXT]

2020-08-04 Thread Wesley Peng




Joseph He wrote:
My company uses Perl for web development. It handles real time payment 
transactions without any problem. Good software is made by the people 
not by the language.


Maybe I am weak on this point, but how does Perl handle types strictly?
For example:

123 + '456'

This is permitted in Perl, and it is dangerous. It would not be possible in a 
strongly typed language.
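
[Editorial illustration, not from the original message: Perl numifies the string `'456'`, so `123 + '456'` quietly evaluates to 579. Python, one of the stricter languages the poster's team also uses, rejects the same expression outright:]

```python
# Perl coerces '456' to the number 456, so 123 + '456' == 579 there.
# Python refuses to mix int and str in arithmetic:
try:
    result = 123 + '456'
except TypeError as exc:
    result = f"rejected: {exc}"

print(result)  # rejected: unsupported operand type(s) for +: 'int' and 'str'
```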


Thanks.


[ceph-users] Apparent bucket corruption error: get_bucket_instance_from_oid failed

2020-08-04 Thread Wesley Dillingham
Long running cluster, currently running 14.2.6

I have a certain user whose buckets have become corrupted in that the
following commands:

radosgw-admin bucket check --bucket 
radosgw-admin bucket list --bucket= 

return with the following:
ERROR: could not init bucket: (2) No such file or directory
2020-08-04 13:47:03.417 7f94dfea86c0 -1 ERROR: get_bucket_instance_from_oid
failed: -2

radosgw-admin metadata get bucket:
is successful.

radosgw-admin metadata get bucket.instance::
yields: ERROR: can't get key: (2) No such file or directory

radosgw-admin metadata list bucket.instance | grep -i 
yields no results.

When I drop to rados and look in the index pool I can see 128 objects
matching the bucket_id as derived from the "metadata get" and this seems to
match other functioning buckets.

Unfortunately this issue was silent and went unnoticed for many months.
We have not retained many of the ceph logs from this time. We do have the
civetweb access logs and have found that error codes began on the same day
that we lowered the pg_num on many of the rgw pools (all of them but the
index_pool and the data_pool). OSDs were filestore at that time and have
since been converted to bluestore. Other than the dates lining up we have
no direct evidence these are related, and did not encounter any
inconsistent PGs. We also used this process on other clusters with no ill
effects.

Ideally I would like to repair and restore the functionality of these
buckets given that it appears the objects in the index pool still exist. Is
there any way to repair these? Do these errors correlate to any known
issues? Thanks in advance for any leads.


Respectfully,

*Wes Dillingham*
w...@wesdillingham.com
LinkedIn 
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[Mpi-forum] August 2020 Meeting Registration Page

2020-08-04 Thread Wesley Bland via mpi-forum
Hi all,

Registration for the August 2020 Meeting is now open. Details are on the 
logistics page on the website, but the link you need to register is here:

https://forms.gle/WbmoWm1iMdWS2QEX7 

As always, you must be registered in order to attend and vote. In particular, 
voting eligibility is cut off after the first voting block, which is scheduled 
for 11:30am Central US time on Monday, August 17. That essentially means that 
there will be no late registration allowed.

If you have any questions or concerns, please let me know.

Thanks,
Wes
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


suggestions for perl as web development language

2020-08-03 Thread Wesley Peng

greetings,

My team uses all of Perl, Ruby, and Python for scripting work.
Perl is stronger for system admin tasks, data analysis, etc.
But for web development, it seems not as popular as the others.
It has fewer frameworks to choose from, and we can't even find the right 
people to do the webdev job with Perl.
Do you think that today we should give up Perl/mod_perl as a web development 
language and choose the alternatives instead?


Thanks & Regards



[PATCH v7 1/4] usb: typec: Add QCOM PMIC typec detection driver

2020-08-03 Thread Wesley Cheng
The QCOM SPMI typec driver handles the role and orientation detection, and
notifies client drivers using the USB role switch framework.   It registers
as a typec port, so orientation can be communicated using the typec switch
APIs.  The driver also attains a handle to the VBUS output regulator, so it
can enable/disable the VBUS source when acting as a host/device.

Signed-off-by: Wesley Cheng 
Acked-by: Heikki Krogerus 
---
 drivers/usb/typec/Kconfig   |  12 ++
 drivers/usb/typec/Makefile  |   1 +
 drivers/usb/typec/qcom-pmic-typec.c | 271 
 3 files changed, 284 insertions(+)
 create mode 100644 drivers/usb/typec/qcom-pmic-typec.c

diff --git a/drivers/usb/typec/Kconfig b/drivers/usb/typec/Kconfig
index 559dd06117e7..63789cf88fce 100644
--- a/drivers/usb/typec/Kconfig
+++ b/drivers/usb/typec/Kconfig
@@ -73,6 +73,18 @@ config TYPEC_TPS6598X
  If you choose to build this driver as a dynamically linked module, the
  module will be called tps6598x.ko.
 
+config TYPEC_QCOM_PMIC
+   tristate "Qualcomm PMIC USB Type-C driver"
+   depends on ARCH_QCOM || COMPILE_TEST
+   help
+ Driver for supporting role switch over the Qualcomm PMIC.  This will
+ handle the USB Type-C role and orientation detection reported by the
+ QCOM PMIC if the PMIC has the capability to handle USB Type-C
+ detection.
+
+ It will also enable the VBUS output to connected devices when a
+ DFP connection is made.
+
 source "drivers/usb/typec/mux/Kconfig"
 
 source "drivers/usb/typec/altmodes/Kconfig"
diff --git a/drivers/usb/typec/Makefile b/drivers/usb/typec/Makefile
index 7753a5c3cd46..cceffd987643 100644
--- a/drivers/usb/typec/Makefile
+++ b/drivers/usb/typec/Makefile
@@ -6,4 +6,5 @@ obj-$(CONFIG_TYPEC_TCPM)+= tcpm/
 obj-$(CONFIG_TYPEC_UCSI)   += ucsi/
 obj-$(CONFIG_TYPEC_HD3SS3220)  += hd3ss3220.o
 obj-$(CONFIG_TYPEC_TPS6598X)   += tps6598x.o
+obj-$(CONFIG_TYPEC_QCOM_PMIC)  += qcom-pmic-typec.o
 obj-$(CONFIG_TYPEC)+= mux/
diff --git a/drivers/usb/typec/qcom-pmic-typec.c 
b/drivers/usb/typec/qcom-pmic-typec.c
new file mode 100644
index ..20b2b6502cb3
--- /dev/null
+++ b/drivers/usb/typec/qcom-pmic-typec.c
@@ -0,0 +1,271 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2020, The Linux Foundation. All rights reserved.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#define TYPEC_MISC_STATUS  0xb
+#define CC_ATTACHED  BIT(0)
+#define CC_ORIENTATION BIT(1)
+#define SNK_SRC_MODE   BIT(6)
+#define TYPEC_MODE_CFG 0x44
+#define TYPEC_DISABLE_CMD  BIT(0)
+#define EN_SNK_ONLY  BIT(1)
+#define EN_SRC_ONLY  BIT(2)
+#define TYPEC_VCONN_CONTROL0x46
+#define VCONN_EN_SRC   BIT(0)
+#define VCONN_EN_VAL   BIT(1)
+#define TYPEC_EXIT_STATE_CFG   0x50
+#define SEL_SRC_UPPER_REF  BIT(2)
+#define TYPEC_INTR_EN_CFG_10x5e
+#define TYPEC_INTR_EN_CFG_1_MASK   GENMASK(7, 0)
+
+struct qcom_pmic_typec {
+   struct device   *dev;
+   struct fwnode_handle*fwnode;
+   struct regmap   *regmap;
+   u32 base;
+
+   struct typec_capability *cap;
+   struct typec_port   *port;
+   struct usb_role_switch *role_sw;
+
+   struct regulator*vbus_reg;
+   boolvbus_enabled;
+};
+
+static void qcom_pmic_typec_enable_vbus_regulator(struct qcom_pmic_typec
+   *qcom_usb, bool enable)
+{
+   int ret;
+
+   if (enable == qcom_usb->vbus_enabled)
+   return;
+
+   if (!qcom_usb->vbus_reg) {
+   qcom_usb->vbus_reg = devm_regulator_get(qcom_usb->dev,
+   "usb_vbus");
+   if (IS_ERR(qcom_usb->vbus_reg)) {
+   qcom_usb->vbus_reg = NULL;
+   return;
+   }
+   }
+
+   if (enable) {
+   ret = regulator_enable(qcom_usb->vbus_reg);
+   if (ret)
+   return;
+   } else {
+   ret = regulator_disable(qcom_usb->vbus_reg);
+   if (ret)
+   return;
+   }
+   qcom_usb->vbus_enabled = enable;
+}
+
+static void qcom_pmic_typec_check_connection(struct qcom_pmic_typec *qcom_usb)
+{
+   enum typec_orientation orientation;
+   enum usb_role role;
+   unsigned int stat;
+   bool enable_vbus;
+
+   regmap_read(qcom_usb->regmap, qcom_usb->base + TYPEC_MISC_STATUS,
+   );
+
+   if (stat & CC_ATTACHED) {
+   orientation = (stat 

[PATCH v7 2/4] dt-bindings: usb: Add Qualcomm PMIC type C controller dt-binding

2020-08-03 Thread Wesley Cheng
Introduce the dt-binding for enabling USB type C orientation and role
detection using the PM8150B.  The driver will be responsible for receiving
the interrupt at a state change on the CC lines, reading the
orientation/role, and communicating this information to the remote
clients, which can include a role switch node and a type C switch.

Signed-off-by: Wesley Cheng 
---
 .../bindings/usb/qcom,pmic-typec.yaml | 131 ++
 1 file changed, 131 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/usb/qcom,pmic-typec.yaml

diff --git a/Documentation/devicetree/bindings/usb/qcom,pmic-typec.yaml 
b/Documentation/devicetree/bindings/usb/qcom,pmic-typec.yaml
new file mode 100644
index ..877e979f413f
--- /dev/null
+++ b/Documentation/devicetree/bindings/usb/qcom,pmic-typec.yaml
@@ -0,0 +1,131 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: "http://devicetree.org/schemas/usb/qcom,pmic-typec.yaml#"
+$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+
+title: Qualcomm PMIC based USB type C Detection Driver
+
+maintainers:
+  - Wesley Cheng 
+
+description: |
+  Qualcomm PMIC Type C Detect
+
+properties:
+  compatible:
+enum:
+  - qcom,pm8150b-usb-typec
+
+  reg:
+maxItems: 1
+description: Type C base address
+
+  interrupts:
+maxItems: 1
+description: CC change interrupt from PMIC
+
+  connector:
+description: Connector type for remote endpoints
+type: object
+
+properties:
+  compatible:
+$ref: /schemas/connector/usb-connector.yaml#/properties/compatible
+enum:
+  - usb-c-connector
+
+  power-role:
+$ref: /schemas/connector/usb-connector.yaml#/properties/power-role
+enum:
+ - dual
+ - source
+ - sink
+
+  data-role:
+$ref: /schemas/connector/usb-connector.yaml#/properties/data-role
+enum:
+  - dual
+  - host
+  - device
+
+  ports:
+description: Remote endpoint connections
+type: object
+$ref: /schemas/connector/usb-connector.yaml#/properties/ports
+
+properties:
+  port@0:
+description: Remote endpoints for the High Speed path
+type: object
+
+  port@1:
+description: Remote endpoints for the Super Speed path
+type: object
+
+properties:
+  endpoint@0:
+description: Connection to USB type C mux node
+type: object
+
+properties:
+  remote-endpoint:
+description: Node reference to the type C mux
+
+  endpoint@1:
+description: Connection to role switch node
+type: object
+
+properties:
+  remote-endpoint:
+description: Node reference to the role switch node
+
+required:
+  - compatible
+
+required:
+  - compatible
+  - interrupts
+  - connector
+
+additionalProperties: false
+
+examples:
+  - |
+#include 
+pm8150b {
+#address-cells = <1>;
+#size-cells = <0>;
+pm8150b_typec: typec@1500 {
+compatible = "qcom,pm8150b-usb-typec";
+reg = <0x1500>;
+interrupts = <0x2 0x15 0x5 IRQ_TYPE_EDGE_RISING>;
+
+connector {
+compatible = "usb-c-connector";
+power-role = "dual";
+data-role = "dual";
+ports {
+#address-cells = <1>;
+#size-cells = <0>;
+port@0 {
+reg = <0>;
+};
+port@1 {
+reg = <1>;
+#address-cells = <1>;
+#size-cells = <0>;
+usb3_data_ss: endpoint@0 {
+reg = <0>;
+remote-endpoint = <_ss_mux>;
+};
+usb3_role: endpoint@1 {
+reg = <1>;
+remote-endpoint = <_drd_switch>;
+};
+};
+};
+};
+};
+};
+...
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project


