** Description changed:

  [Impact]
  Applications using the python-tornado package at a version earlier than
  v5.1 (which introduced websocket flow control upstream) are susceptible
  to an out-of-memory condition related to websockets.
  
  [Other Info]
  
  Upstream commit(s):
  
https://github.com/tornadoweb/tornado/pull/2351/commits/20becca336caae61cd24f7afba0e177c0a210c70
  
  $ git remote -v
  origin        https://github.com/tornadoweb/tornado.git (fetch)
  origin        https://github.com/tornadoweb/tornado.git (push)
  
  $ git describe --contains 20becca3
  v5.1.0b1~28^2~1
  
  $ rmadison python3-tornado
   => python3-tornado | 4.2.1-1ubuntu3      | xenial
      python3-tornado | 4.5.3-1             | bionic/universe
   => python3-tornado | 4.5.3-1ubuntu0.1    | bionic-updates/universe
      python3-tornado | 6.0.3+really5.1.1-3 | focal/universe
      python3-tornado | 6.0.4-2             | groovy/universe
      python3-tornado | 6.0.4-3             | hirsute/universe
      python3-tornado | 6.1.0-1             | hirsute-proposed/universe
  
  [Original Description]
  
  Tornado has no 'flow control' for websockets. A websocket connection
  receives data as fast as it can and stores it in a collections.deque. If
  that data is not consumed as fast as it is written, the deque grows
  indefinitely, eventually exhausting memory and getting the process killed.
  The fix is to use a bounded Queue and have the client read/get messages
  from that queue.
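
  To illustrate the mechanism (a minimal sketch, not the upstream patch
  itself): appending to a collections.deque never blocks, while putting into
  a tornado.queues.Queue with maxsize=1 makes the producer wait until the
  consumer has taken the previous item, which is the flow control the fix
  introduces.

  import collections
  from tornado import gen, ioloop
  from tornado.queues import Queue

  @gen.coroutine
  def demo():
      unbounded = collections.deque()   # grows without limit
      bounded = Queue(maxsize=1)        # provides backpressure

      @gen.coroutine
      def slow_consumer():
          while True:
              item = yield bounded.get()
              yield gen.sleep(0.1)      # simulate a consumer that cannot keep up
              print("consumed", item, "- items sitting in the deque:", len(unbounded))

      ioloop.IOLoop.current().spawn_callback(slow_consumer)

      for i in range(10):
          unbounded.append(i)           # returns immediately; memory keeps growing
          yield bounded.put(i)          # waits here until the consumer catches up

  if __name__ == "__main__":
      ioloop.IOLoop.current().run_sync(demo)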
  
  Patch file [0]
  Commit history [1]
  GitHub [2]
  Issue [3]
  
  [0] 
https://patch-diff.githubusercontent.com/raw/tornadoweb/tornado/pull/2351.patch
  [1] https://github.com/tornadoweb/tornado/pull/2351/commits
  [2] https://github.com/tornadoweb/tornado
  [3] https://github.com/tornadoweb/tornado/issues/2341
  
  [Test Case]
  
  I will be attaching two Python test files:
  client.py
  server.py
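
  For reference, the server side of the reproducer can be as simple as the
  following sketch (the attached server.py may differ; the handler name, the
  /ws path, port 8888 and the payload size are illustrative). It floods the
  connected client with messages and prints whatever the client sends back:

  from tornado import gen, ioloop, web, websocket

  class FloodHandler(websocket.WebSocketHandler):
      def open(self):
          self.running = True
          ioloop.IOLoop.current().spawn_callback(self.flood)

      @gen.coroutine
      def flood(self):
          payload = "x" * 1024
          while self.running:
              try:
                  # write_message returns a Future (Tornado 4.3+); yielding it
                  # paces the server on its own socket, but the client's
                  # application-level buffer still fills up.
                  yield self.write_message(payload)
              except websocket.WebSocketClosedError:
                  break

      def on_message(self, message):
          print("message:", message)    # e.g. "message: keep alive"

      def on_close(self):
          self.running = False

  if __name__ == "__main__":
      app = web.Application([(r"/ws", FloodHandler)])
      app.listen(8888)
      ioloop.IOLoop.current().start()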
  
  # create an LXC container with a memory limit and swap turned off
  $ sudo apt-get install lxc lxd
  $ lxd init
  $ lxc launch ubuntu:18.04 bionic-python-tornado

  # limit the container's resources
  $ lxc config set bionic-python-tornado limits.cpu 2
  # cap RAM at 150MB
  $ lxc config set bionic-python-tornado limits.memory 150MB
  # disable swap inside the container [4]
  $ lxc config set bionic-python-tornado limits.memory.swap false
  
  # enter the container, install dev tools and download the source code
  $ lxc exec bionic-python-tornado bash
  $ apt-get update
  $ apt install ubuntu-dev-tools -y
  $ pull-lp-source python-tornado bionic
  $ cd python-tornado-4.5.3
  $ apt build-dep .
  
  # copy the attached client.py and server.py into
  # ~/python-tornado-4.5.3/demos (e.g. with lxc file push, scp, or by
  # pasting them into new files)
  
  # build code
  $ python3 setup.py build
  $ python3 setup.py install
  
  # Open three terminals: one for server.py, one for client.py, and one
  # running top to watch memory usage.

  # run server.py, client.py, and top in their own terminals
  $ python3 demos/server.py
  $ python3 demos/client.py
  $ top
  
  client.py prints the length of the connection's internal
  collections.deque buffer.
  
  server.py prints the messages it receives from the client, e.g.:
  message: keep alive
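
  Both outputs come from a client along these lines (again only a sketch;
  the attached client.py may differ, and the URL/port match the server
  sketch above). It sends a "keep alive" message once a second, never calls
  read_message(), and prints the size of the connection's internal buffer.
  The attribute name read_queue is an assumption based on the Tornado
  source (a collections.deque before the patch):

  from tornado import gen, ioloop
  from tornado.websocket import websocket_connect

  @gen.coroutine
  def main():
      conn = yield websocket_connect("ws://127.0.0.1:8888/ws")
      while True:
          conn.write_message("keep alive")   # printed by server.py
          # On unpatched Tornado this number keeps growing; see further below
          # for a size check that also works after the patch.
          print("buffered messages:", len(conn.read_queue))
          yield gen.sleep(1)

  if __name__ == "__main__":
      ioloop.IOLoop.current().run_sync(main)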
  
  * press E (shift+e) in the terminal running top to cycle the summary
  memory units (e.g. to MiB)
  top shows that swap is off (or barely used) and that memory is capped at
  150MB
  
  The Python process never raises the MemoryError one might expect; instead
  the kernel's OOM killer terminates it, which can be confirmed with dmesg
  (run on the host):
  $ sudo dmesg | grep -i python

  The output looks similar to this:
  [ 3250.067833] 
oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=lxc.payload.iptest,mems_allowed=0,oom_memcg=/lxc.payload.iptest,task_memcg=/lxc.payload.iptest,task=python3,pid=44889,uid=1000000
  [ 3250.067838] Memory cgroup out of memory: Killed process 44889 (python3) 
total-vm:304616kB, anon-rss:235152kB, file-rss:0kB, shmem-rss:0kB, UID:1000000 
pgtables:628kB oom_score_adj:0
  [ 3250.075096] oom_reaper: reaped process 44889 (python3), now anon-rss:0kB, 
file-rss:0kB, shmem-rss:0k
  
  After either applying the patch or using the focal (or later) source:
  $ pull-lp-source python-tornado focal

  we can run the exact same setup again, and this time the client shows
  that the new queue object never has a length greater than 1.
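
  To let the same client report the buffer size on both the old and the new
  code, a small helper along these lines can be used (read_queue is again an
  assumed internal attribute name; tornado.queues.Queue exposes qsize()
  rather than supporting len()):

  def buffer_size(conn):
      # Before the patch conn.read_queue is a collections.deque (len() works);
      # after it, a bounded tornado.queues.Queue(1) (use qsize() instead).
      buf = conn.read_queue
      return buf.qsize() if hasattr(buf, "qsize") else len(buf)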
  
+ We have shown that before the patch the structure used to buffer incoming
+ messages was an unbounded collections.deque: "If maxlen is not specified
+ or is None, deques may grow to an arbitrary length over time." [6] The
+ patch replaces it with a blocking queue of size 1 (Queue(1)), so only one
+ unread message is ever buffered at a time. [7]
  
  [4] 
https://discuss.linuxcontainers.org/t/limiting-swap-in-lxc-container/6343/4
  [5] attached screenshot
+ [6] https://docs.python.org/3/library/collections.html?highlight=collections%20deque#collections.deque
+ [7] https://www.tornadoweb.org/en/stable/queues.html

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1903733

Title:
  Out of memory issue for websocket client

Status in python-tornado package in Ubuntu:
  Fix Released
Status in python-tornado source package in Xenial:
  In Progress
Status in python-tornado source package in Bionic:
  In Progress
Status in python-tornado source package in Focal:
  Fix Released
Status in python-tornado source package in Groovy:
  Fix Released
Status in python-tornado source package in Hirsute:
  Fix Released

