How to decide which method to use - shared folder or database?
Either store sessions in the database or use sticky sessions. See
http://web2py.com/books/default/chapter/29/13/deployment-recipes#Efficiency-and-scalability.
Anthony
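For the archives: the book chapter linked above shows the actual web2py call for database-backed sessions, `session.connect(request, response, db=db)` in a model file. As a self-contained illustration of *why* this helps with two servers, here is a toy store (this is not web2py's implementation):

```python
import json
import sqlite3

# Toy illustration of database-backed sessions: any of several app
# servers can load and save a session by id, because the state lives
# in a shared database rather than on one server's filesystem.
# This is NOT web2py's code -- in web2py you just call
# session.connect(request, response, db=db) in a model file.

class DbSessionStore:
    def __init__(self, conn):
        self.conn = conn
        conn.execute(
            "CREATE TABLE IF NOT EXISTS web_session "
            "(id TEXT PRIMARY KEY, data TEXT)")

    def save(self, session_id, data):
        self.conn.execute(
            "INSERT OR REPLACE INTO web_session VALUES (?, ?)",
            (session_id, json.dumps(data)))

    def load(self, session_id):
        row = self.conn.execute(
            "SELECT data FROM web_session WHERE id = ?",
            (session_id,)).fetchone()
        return json.loads(row[0]) if row else {}

# Two "servers" sharing one database see the same session state.
conn = sqlite3.connect(":memory:")
server_a = DbSessionStore(conn)
server_b = DbSessionStore(conn)
server_a.save("abc123", {"user": "brian"})
print(server_b.load("abc123"))  # {'user': 'brian'}
```

With file-based sessions, only the server that wrote the session file can read it back, which is why the alternative is sticky sessions (pin each client to one server).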
On Friday, June 23, 2017 at 4:12:37 PM UTC-4, briann...@gmail.com wrote:
>
> I'm planning to have 2 servers hosting web2py for
First of all, I refer you to niphlod's answer, which is very comprehensive. Let me add
something in relation to your last questions.
When you run web2py with Rocket, Rocket uses threads. When you run web2py
with nginx+uwsgi, uwsgi runs in emperor mode and uses multiple processes
(not threads), and you
I saw a relatively/mostly related issue on the repo and so I feel I should
give an answer...
web2py is a web framework: we're not in the business of managing processes
or webservers. We're in the business of building an app that does what
you want in response to a web request.
For
On Wednesday, September 21, 2016 at 6:37:21 AM UTC-7, Pierre wrote:
>
> Hi,
>
> I am very familiar with the concepts of threads, processes and scaling
> issues. What I understand so far is:
> (1) a web2py application process manages a number of threads dealing with
> requests
> (2) I can run
yes, i manage a (seemingly to me) large application: 30-40 requests per
second average, sustained 24 hours a day. that app is the data access API
for an iOS app plus an accompanying website. some thoughts:
- we use google app engine. on the up side it serves all my requests, on
the downside
Did you try 2.0.6? Does everything work well for you? Do you notice any
performance improvements?
On Monday, 3 September 2012 08:07:09 UTC-5, howesc wrote:
yes, i manage a (seemingly to me) large application. 30-40 requests per
second average sustained 24 hours a day. that app is the data
Great!! The first set of questions ... some are maybe too private, you see
...
API for iOS
### does it mean that you don't use views, just rendering data to JSON, and
data representation is done in the iOS app?
30-40 requests per second average sustained 24 hours a day.
### have you tried how high
On 3 Sep 2012, at 7:42 AM, David Marko dma...@tiscali.cz wrote:
about the iOS api - we have RESTful requests that are responded to in
JSON, and we have a couple of views rendered as HTML. much more JSON though.
GAE just turns on more instances as there are more requests. they will
happily run your bill up to the maximum daily limit that you set. i
I think it'd be nice to have sessions created explicitly inside a WITH
statement block so that users create sessions only when they want. But I
don't know if it's feasible, because I suspect sessions are created
automatically and passed into the exec environment, and it appears sessions
are
I promise you that 2.1.0 (in one or two weeks) will address the first issue,
as I have a patch from Anthony that allows you to specify which models to run.
About the second issue: sessions are only locked per user, not globally, in
order to prevent inconsistent session states. You can
So... in trunk we have an experimental solution to the session issue
proposed by Anthony. We allow disabling of session logic using routes.py:
routes_in = [('/welcome', '/welcome', dict(web2py_disable_session=False))]
This solution may or may not stay in this form. We are testing it and
is the sky blue, really really blue? those kinds of questions really make
me wonder.
What is a high volume of traffic? 1k concurrent requests? 10GB/hour of
traffic? many small requests? fewer requests, each returning a large chunk
of data? The data returned to the users will be the result of
Hi all, I'm also curious about this. Can howesc share his experience ... or
others? We are planning a project with an estimated 1 million views per
working day (spread over 12 hours). I know that there are many aspects, but
generally it would be encouraging to hear real-life data with architecture
info. How many
Thanks for the quick reply. I apologize for not being clear. I do not
have a specific web app or use case in mind. I just wanted to know
whether it can be used for large-scale projects (yes, I understand that
now). However I could not interpret some points of your reply.
1. Database -
1. DAL is not a database. My point was: if you can't get your mongodb
instance to return 1m requests per second (using nothing but mongo itself),
surely web2py can't help with that.
2. Never said that. Every app has different requirements. That being
said, there is at least one instance of
1. Apart from building tables so that joins are minimal and building
indexes ... what are other optimisations to make that possible?
2. I understand that some tuning should be done. Could you be more
specific about the type of tuning that should be done? What should be
changed exactly? Can
This is a discussion with no end if you don't provide a complete app.
1. Read your preferred manual/book/guide on how to tune your db engine
(whichever engine you choose). Find out if it performs better with more
RAM, or faster disks, or more CPU, on what OS, etc., and plan your resources
Just one last question.
1. Is it advantageous to use the Database Abstraction Layer provided by
web2py or the vendor-specific db module? (I know it depends on the type of
db, but as a general rule)
On Sunday, September 2, 2012 3:57:46 AM UTC+5:30, Niphlod wrote:
This is a discussion with
There are many things you can do for efficiency and scalability that are
not specific to web2py. I'll leave you to research those topics, but here
are some items specific to web2py (many of these are mentioned in the
Efficiency Tricks section of the Deployment
There are different issues:
1) speed
2) scalability with number of users
3) scalability with data size
4) scalability with complexity of queries
1) Anthony has posted lots of advice in this direction. Out of the box
web2py is not the fastest framework because it does more. For example you
Web2py can be used to build fairly effortlessly either complex applications
with *small* numbers of concurrent users (*small* here is relative and
subjective) or simple applications with *large* numbers of concurrent users.
But if you want to build complex applications with large numbers of
Second, session handling. Sessions are locked at the beginning of a
request and released only when the request is finished. You can call
session.forget(); but when your app is complex, this is not feasible or
natural.
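To make the locking behaviour described above concrete, here is a toy model of per-session locks (this is the idea, not gluon's actual code), including the early release that `session.forget(response)` performs:

```python
import threading
import time

# Toy model of per-session request locking: requests from the SAME
# session serialize on one lock, while different sessions would
# proceed in parallel. Not web2py's implementation.

session_locks = {}
registry_lock = threading.Lock()
log = []

def handle_request(session_id, name, forget=False):
    with registry_lock:
        lock = session_locks.setdefault(session_id, threading.Lock())
    lock.acquire()
    try:
        if forget:
            # analogous to session.forget(response): give up the lock
            # early so other requests in this session are not blocked
            lock.release()
            lock = None
        log.append(("start", name))
        time.sleep(0.05)  # simulate request work
        log.append(("end", name))
    finally:
        if lock is not None:
            lock.release()

# Two concurrent requests from the same session never overlap:
t1 = threading.Thread(target=handle_request, args=("s1", "req1"))
t2 = threading.Thread(target=handle_request, args=("s1", "req2"))
t1.start(); t2.start(); t1.join(); t2.join()
print(log)
```

Passing `forget=True` to either request would let them interleave, which is the trade-off the thread is discussing: faster, but the session can no longer be written consistently.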
True, but if there's a lot of complexity associated with when sessions
Thanks, that will be helpful; I liked the idea of paths also. However if
you have 200,000+ nodes in a tree, the paths might become hard to work
with.
Yes, my tree was somewhat smaller! It seems to me that WITH RECURSIVE is
the easiest method to manage this, although of course the
Ok, so I have read the chapter; I think the best option is Postgres recursive
queries.
On Tue, Apr 10, 2012 at 10:39 AM, Richard Vézina
ml.richard.vez...@gmail.com wrote:
Yes, SQL Antipatterns covers 4 or 5 kinds of tree representations, classifies
them depending on usage, and gives pros and cons.
I
Recursion is slow because of the union... But it may fit your needs better, I
don't know.
Richard
On Wed, Apr 11, 2012 at 9:45 AM, Bruce Wade bruce.w...@gmail.com wrote:
Yeah, not sure; I have never used recursion at the database level. However it
seems to be the only option; none of the other options in that chapter fit
my needs.
On Wed, Apr 11, 2012 at 7:05 AM, Richard Vézina ml.richard.vez...@gmail.com
wrote:
Recursion is slow because the union... But it may
I made a lot of that last year... Sometimes it was driving me nuts, needing
to have the same columns for each table in the union... A good naming
convention helps to make things clearer when you get back to the code...
Note, I would try to make it with web2py first if I needed to write
those
Yeah, probably. I already have it with web2py but it is slow: with 20,000
nodes it takes around 5-10 seconds to load the results.
On Wed, Apr 11, 2012 at 7:29 AM, Richard Vézina ml.richard.vez...@gmail.com
wrote:
I made a lot of that last year... Sometimes it was driving me nuts,
needing to have
Maybe it needs some code optimisation... Also I would look at the database
level and make sure you have indexes on the right columns.
These links could help:
http://www.python.org/doc/essays/list2str.html
http://wiki.python.org/moin/PythonSpeed/PerformanceTips#Loops
Richard
On Wed, Apr 11, 2012
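Richard's indexing advice can be sketched with the stdlib sqlite3 module so it is self-contained (the `node`/`parent_id` schema and index name are invented for the example); with web2py's DAL you could issue the same CREATE INDEX statement once via `db.executesql(...)`:

```python
import sqlite3

# Sketch: add an index on the column used to walk the tree (here a
# hypothetical parent_id adjacency column), then ask the planner to
# confirm it is used for parent lookups.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE node (id INTEGER PRIMARY KEY, parent_id INTEGER)")
conn.execute("CREATE INDEX IF NOT EXISTS idx_node_parent ON node(parent_id)")

# EXPLAIN QUERY PLAN shows the lookup is satisfied from the index
# instead of a full table scan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM node WHERE parent_id = ?", (1,)
).fetchall()
print(plan)
```

On Postgres the equivalent check is `EXPLAIN SELECT ...`; the point is the same: every column that appears in the recursive join condition should be indexed.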
I've just been working with a tree myself. Database recursion (CTE) seems
very effective and is now supported by most of the larger DBs (although
sadly not SQLite yet, I don't think).
This is the reference link for Postgres and a SQL query I wrote for
Firebird,
I thought it might vaguely
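For later readers: SQLite gained WITH RECURSIVE in 3.8.3 (2014), after this thread, so the caveat above no longer applies. A minimal, self-contained sketch of the kind of recursive query being discussed (the schema is invented for the example; the same SQL runs on Postgres, and from web2py you could issue it via `db.executesql(...)`):

```python
import sqlite3

# Walk an adjacency-list tree with a recursive CTE: start from one
# node and repeatedly join children onto the result set.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE node (id INTEGER PRIMARY KEY, parent_id INTEGER);
    INSERT INTO node VALUES (1, NULL), (2, 1), (3, 1), (4, 2), (5, 4);
""")
descendants = conn.execute("""
    WITH RECURSIVE subtree(id) AS (
        SELECT id FROM node WHERE id = ?
        UNION ALL
        SELECT node.id FROM node
        JOIN subtree ON node.parent_id = subtree.id
    )
    SELECT id FROM subtree ORDER BY id
""", (2,)).fetchall()
print([r[0] for r in descendants])  # [2, 4, 5]
```

One round trip returns the whole subtree, which is the advantage over issuing one query per tree level from application code.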
Thanks, that will be helpful; I liked the idea of paths also. However if you
have 200,000+ nodes in a tree, the paths might become hard to work with.
On Wed, Apr 11, 2012 at 11:58 AM, villas villa...@gmail.com wrote:
I've just been working with a tree myself. Database recursion (CTE) seems
Well, I know the major bottleneck in the site is the binary tree. I am
still trying to figure out how to do this in the best and most efficient way.
Maybe this could be useful:
http://dirtsimple.org/2010/11/simplest-way-to-do-tree-based-queries.html
Yes, SQL Antipatterns covers 4 or 5 kinds of tree representations, classifies
them depending on usage, and gives pros and cons.
I chose Closure Table since it was one of the more complete for my case.
But if the parent node changes frequently, it's not the most efficient tree
pattern.
Have a look, it is
hi bruce,
what are the specs of your web servers and db server/s?
what web server are you using and how?
On Saturday, April 7, 2012 4:59:20 PM UTC+3, Detectedstealth wrote:
Hi,
So now that my site has been developed with web2py I am now looking to
release it this month. However also
Each server:
Ubuntu 10.04
2GB RAM
80GB HD
Currently running Apache 2, however switching to nginx + uwsgi.
Database: PostgreSQL 9.1
On Sun, Apr 8, 2012 at 7:31 AM, Mengu whalb...@gmail.com wrote:
hi bruce,
what are the specs of your web servers and db server/s?
what web server are you using
The recipe book described a
class TreeProxy(object): ...
which implements unordered tree traversal. It is the most efficient way to
store records in a tree and retrieve them in a single query.
Anyway, if this is the bottleneck, you should definitely cache it
somehow.
On
Yeah, the TreeProxy won't work in our case, as this is an MLM tree, which
means there is really no order. For example:
1
2 10
1193 4
Etc. I tried to convince the CEO to use a balanced tree when we first
started programming, but that wasn't an
Bruce,
It might help, maybe not, but Pragmatic Programmers has a book called SQL
Antipatterns, with Chapter 3 dedicated to tree structures in databases:
http://pragprog.com/book/bksqla/sql-antipatterns
They show several alternatives to the usual starting point of adjacency
lists to describe
Thanks, Ron, I will take a look.
You are correct that someone can sponsor more than 2 people; they can sponsor
as many people as they wish. However the sponsor tree and binary tree are very
different.
For example, a binary tree has two legs; our company populates one leg, and
the opposite leg is up to the
Hello Bruce,
The bottleneck is always the database. 1M reqs/day is 7 reqs/minute. It is
not too much, but make sure:
- static files are not served by web2py
- you cache as much as possible
If you need a code review to spot possible problems, let me know.
Massimo
On Saturday, 7 April 2012
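Massimo's "cache as much as possible" maps to web2py's `cache.ram(key, f, time_expire)` helper (and `cache.disk`). Here is a pure-Python sketch of what that pattern does, not gluon's implementation:

```python
import time

# Sketch of the cache.ram idea: run the expensive callable only when
# there is no stored value younger than time_expire seconds.
# Not web2py's actual code.

_store = {}

def cache_ram(key, f, time_expire):
    now = time.time()
    hit = _store.get(key)
    if hit is not None and now - hit[0] < time_expire:
        return hit[1]            # fresh enough: skip the callable
    value = f()
    _store[key] = (now, value)
    return value

calls = []

def expensive_tree_query():
    calls.append(1)              # pretend this hits the database
    return ["node1", "node2"]

a = cache_ram("tree", expensive_tree_query, 60)
b = cache_ram("tree", expensive_tree_query, 60)  # served from cache
print(a == b, len(calls))  # True 1
```

For a read-heavy tree like the one discussed in this thread, caching the computed structure for even a minute removes almost all of the repeated query cost.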
1M reqs/day is 7 reqs/minute.
Now we know how Massimo gets so much done -- his days are 2400 hours long.
;-)
Thanks for the req/sec breakdown; we get most of our traffic in a 5-6 hour
window.
A code review would be good; I am sure there are a lot of areas that could
use some improvement.
--
Regards,
Bruce
On Sat, Apr 7, 2012 at 7:33 AM, Anthony abasta...@gmail.com wrote:
1M reqs/day is 7
Right... what? [changing color to red] Note to self: never do math before
fully waking up.
Correction: 2 reqs/second/server (1M/24/60/60/6 servers). Still OK from a
web2py perspective.
If all requests hit the database once: 12 reqs/second. It really depends on
what those requests do.
Massimo
Well, I know the major bottleneck in the site is the binary tree. I am
still trying to figure out how to do this in the best and most efficient
way. If this was a standalone app in C/C++ this would all be in memory,
which wouldn't be such a problem. However, the way this is implemented it
needs to query
You cannot just switch from the datastore to gae:sql. They are
different databases. To use the latter you need to set up an instance
for it. Google has documentation about it.
Anyway, that may solve your problem or make it worse. First of all you
need to identify why you have too many reads
Here is my log from the Google App Engine local dev server. I don't
understand where all these 1s are being printed from and what they
mean. Some of these pages are just static and are not making any
Datastore calls in the controller. Any insight would be helpful. Thank
you.
2012-01-31
I suspect you have tables which use represent to look up the
representation of each record.
For example:
db.define_table('item',
    Field('name'),
    Field('owner', 'reference auth_user'))
{{=SQLFORM.grid(db.item)}}
The grid needs to represent each owner. For each row it will look up
the
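The usual remedy for this kind of per-row represent lookup is to fetch every referenced row in one query and build the representations from a dict. A pure-Python sketch of the difference (the data and helper names are invented for illustration; only `represent` and `SQLFORM.grid` above are web2py's):

```python
# Sketch of the N+1 problem behind per-row represent lookups: resolving
# the owner row by row issues one query per record, while prefetching
# all owners in one query and representing from a dict issues a single
# extra query total.

items = [{"name": "hammer", "owner": 1},
         {"name": "saw", "owner": 2},
         {"name": "drill", "owner": 1}]
users = {1: "Alice", 2: "Bob"}   # stand-in for the auth_user table

queries = []                     # record every simulated database hit

def fetch_user(uid):
    queries.append(("SELECT ... WHERE id=?", uid))  # one query per row
    return users[uid]

# naive: N queries for N rows
naive = [(it["name"], fetch_user(it["owner"])) for it in items]

# better: one query fetching every referenced owner at once
owner_ids = {it["owner"] for it in items}
queries.append(("SELECT ... WHERE id IN (?)", owner_ids))
owner_map = {uid: users[uid] for uid in owner_ids}
batched = [(it["name"], owner_map[it["owner"]]) for it in items]

print(naive == batched, len(queries))  # True 4  (3 naive + 1 batched)
```

With thousands of grid rows the naive path dominates page time, which is consistent with the flood of small datastore reads being reported in this thread.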