Why do I see digest replies to posts I never saw in a digest? [was: RE: Why I fail so bad to check for memory leak with this code?]

2022-07-23 Thread pjfarley3
OT to the original subject, but can anyone explain to me why, in the forum
digest emails I receive, I often see a reply to a post whose original never
appeared in any prior digest?

Peter

> -----Original Message-----
> From: Marco Sulla 
> Sent: Friday, July 22, 2022 3:41 PM
> To: Barry 
> Cc: MRAB ; Python-list@python.org
> Subject: Re: Why I fail so bad to check for memory leak with this code?
> 
> On Fri, 22 Jul 2022 at 09:00, Barry  wrote:
> > With code as complex as python’s there will be memory allocations that
> occur that will not be directly related to the python code you test.
> >
> > To put it another way there is noise in your memory allocation signal.
> >
> > Usually the signal of a memory leak is very clear, as you noticed.
> >
> > For rare leaks I would use a tool like valgrind.
> 
> Thank you all, but I needed a simple decorator to automate the memory leak
> (and segfault) tests. I think this version is good enough; I hope it can be
> useful to someone:
 
--

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Why I fail so bad to check for memory leak with this code?

2022-07-22 Thread Marco Sulla
On Fri, 22 Jul 2022 at 09:00, Barry  wrote:
> With code as complex as python’s there will be memory allocations that
> occur that will not be directly related to the python code you test.
>
> To put it another way there is noise in your memory allocation signal.
>
> Usually the signal of a memory leak is very clear, as you noticed.
>
> For rare leaks I would use a tool like valgrind.

Thank you all, but I needed a simple decorator to automate the memory
leak (and segfault) tests. I think this version is good enough; I hope
it can be useful to someone:

import gc
import tracemalloc

def trace(iterations=100):
    def decorator(func):
        def wrapper():
            print(
                f"Loops: {iterations} - Evaluating: {func.__name__}",
                flush=True
            )

            tracemalloc.start()

            snapshot1 = tracemalloc.take_snapshot().filter_traces(
                (tracemalloc.Filter(True, __file__), )
            )

            for i in range(iterations):
                func()

            gc.collect()

            snapshot2 = tracemalloc.take_snapshot().filter_traces(
                (tracemalloc.Filter(True, __file__), )
            )

            top_stats = snapshot2.compare_to(snapshot1, 'lineno')
            tracemalloc.stop()

            for stat in top_stats:
                if stat.count_diff * 100 > iterations:
                    raise ValueError(f"stat: {stat}")

        return wrapper

    return decorator


If the decorated function fails, you can try increasing the iterations
parameter. I found that in my cases I sometimes needed a value of 200 or 300.
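To make the approach concrete, here is a self-contained sketch of the same snapshot-diff technique; the helper name, the deliberately leaky test functions, and the stricter threshold are mine, not from the post:

```python
import gc
import tracemalloc

def find_leaks(func, iterations=100):
    """Return tracemalloc stats for lines in this file whose object count grew."""
    tracemalloc.start()
    snap1 = tracemalloc.take_snapshot().filter_traces(
        (tracemalloc.Filter(True, __file__),)
    )
    for _ in range(iterations):
        func()
    gc.collect()
    snap2 = tracemalloc.take_snapshot().filter_traces(
        (tracemalloc.Filter(True, __file__),)
    )
    tracemalloc.stop()
    # A genuine leak gains roughly one object per call; snapshot noise gains
    # only a handful, so flag lines that gained more than iterations/2 objects.
    return [s for s in snap2.compare_to(snap1, "lineno")
            if s.count_diff * 2 > iterations]

cache = []
def leaky():
    cache.append(object())   # one object kept alive per call: a real leak

def clean():
    sorted([3, 1, 2])        # only temporaries, freed immediately
```

With these definitions, `find_leaks(leaky)` reports the `cache.append` line, while `find_leaks(clean)` comes back empty.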


Re: Why I fail so bad to check for memory leak with this code?

2022-07-22 Thread Barry



> On 21 Jul 2022, at 21:54, Marco Sulla  wrote:
> On Thu, 21 Jul 2022 at 22:28, MRAB  wrote:
>> 
>> It's something to do with pickling iterators because it still occurs
>> when I reduce func_76 to:
>> 
>> @trace
>> def func_76():
>>     pickle.dumps(iter([]))
> 
> That's very strange. I found a bunch of true memory leaks with this
> decorator, so it seems to be reliable. It's correct with pickle and with
> iter, but not when pickling iterators.

With code as complex as python’s there will be memory allocations that occur 
that will not be directly related to the python code you test.

To put it another way there is noise in your memory allocation signal.

Usually the signal of a memory leak is very clear, as you noticed.

For rare leaks I would use a tool like valgrind.

Barry



Re: Why I fail so bad to check for memory leak with this code?

2022-07-21 Thread MRAB

On 21/07/2022 23:39, Marco Sulla wrote:

I've done this other simple test:

#!/usr/bin/env python3

import tracemalloc
import gc
import pickle

tracemalloc.start()

snapshot1 = tracemalloc.take_snapshot().filter_traces(
    (tracemalloc.Filter(True, __file__), )
)

for i in range(10**7):
    pickle.dumps(iter([]))

gc.collect()

snapshot2 = tracemalloc.take_snapshot().filter_traces(
    (tracemalloc.Filter(True, __file__), )
)

top_stats = snapshot2.compare_to(snapshot1, 'lineno')
tracemalloc.stop()

for stat in top_stats:
    print(stat)

The result is:

/home/marco/sources/test.py:14: size=3339 B (+3339 B), count=63 (+63), average=53 B
/home/marco/sources/test.py:9: size=464 B (+464 B), count=1 (+1), average=464 B
/home/marco/sources/test.py:10: size=456 B (+456 B), count=1 (+1), average=456 B
/home/marco/sources/test.py:13: size=28 B (+28 B), count=1 (+1), average=28 B


It seems that, after 10 million loops, only 63 allocations remain, with only
~3 KB. It seems to me that we can't call that a leak, no? Probably
pickle needs a lot more cycles to be sure there's actually a real leak.

If it was a leak, then the amount of memory used or the counts would
increase with increasing iterations. If that's not happening, if the
memory used and the counts stay roughly the same, then it's probably
not a leak, unless it's a leak of something that happens only once, such
as creating a cache or buffer on first use.
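That criterion (growth proportional to the iteration count) can be checked directly with `tracemalloc.get_traced_memory()`; the helper below is my own sketch, not code from the thread:

```python
import gc
import tracemalloc

def traced_growth(func, n):
    """Net bytes still allocated (as seen by tracemalloc) after n calls to func()."""
    tracemalloc.start()
    gc.collect()
    before, _ = tracemalloc.get_traced_memory()
    for _ in range(n):
        func()
    gc.collect()
    after, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return after - before

leaked = []
def leaky():
    leaked.append(bytearray(64))   # kept alive on purpose: a real leak

# A real leak's growth scales with n; one-off caches and noise do not.
g_small = traced_growth(leaky, 1_000)
g_big = traced_growth(leaky, 10_000)
```

For a genuine leak like the one above, `g_big` comes out roughly ten times `g_small`; for noise or a first-use cache, the two values stay of the same order.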



Re: Why I fail so bad to check for memory leak with this code?

2022-07-21 Thread Marco Sulla
I've done this other simple test:

#!/usr/bin/env python3

import tracemalloc
import gc
import pickle

tracemalloc.start()

snapshot1 = tracemalloc.take_snapshot().filter_traces(
    (tracemalloc.Filter(True, __file__), )
)

for i in range(10**7):
    pickle.dumps(iter([]))

gc.collect()

snapshot2 = tracemalloc.take_snapshot().filter_traces(
    (tracemalloc.Filter(True, __file__), )
)

top_stats = snapshot2.compare_to(snapshot1, 'lineno')
tracemalloc.stop()

for stat in top_stats:
    print(stat)

The result is:

/home/marco/sources/test.py:14: size=3339 B (+3339 B), count=63 (+63), average=53 B
/home/marco/sources/test.py:9: size=464 B (+464 B), count=1 (+1), average=464 B
/home/marco/sources/test.py:10: size=456 B (+456 B), count=1 (+1), average=456 B
/home/marco/sources/test.py:13: size=28 B (+28 B), count=1 (+1), average=28 B

It seems that, after 10 million loops, only 63 allocations remain, with only
~3 KB. It seems to me that we can't call that a leak, no? Probably pickle
needs a lot more cycles to be sure there's actually a real leak.


Re: Why I fail so bad to check for memory leak with this code?

2022-07-21 Thread Marco Sulla
This naive code shows no leak:

import resource
import pickle

c = 0

while True:
    pickle.dumps(iter([]))

    if (c % 10000) == 0:
        max_rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
        print(f"iteration: {c}, max rss: {max_rss} kb")

    c += 1
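A bounded variant of the same ru_maxrss check can be sketched as follows (mine, not Marco's); note that `ru_maxrss` is reported in KiB on Linux but in bytes on macOS, and that it is a high-water mark, so it can only grow:

```python
import resource

def peak_rss():
    """Peak resident set size of this process (KiB on Linux, bytes on macOS)."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

before = peak_rss()
kept = [bytes(1024) for _ in range(50_000)]   # ~50 MiB deliberately held alive
after = peak_rss()
```

Holding the objects alive pushes the high-water mark up (`after > before`), whereas a loop that only creates temporaries should leave it roughly flat.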


Re: Why I fail so bad to check for memory leak with this code?

2022-07-21 Thread Marco Sulla
On Thu, 21 Jul 2022 at 22:28, MRAB  wrote:
>
> It's something to do with pickling iterators because it still occurs
> when I reduce func_76 to:
>
> @trace
> def func_76():
>     pickle.dumps(iter([]))

That's very strange. I found a bunch of true memory leaks with this
decorator, so it seems to be reliable. It's correct with pickle and with
iter, but not when pickling iterators.
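As an aside, pickling built-in iterators is itself well-defined in CPython: they expose `__reduce__`, and unpickling reconstructs the iterator at its saved position. A quick sketch:

```python
import pickle

it = iter([10, 20, 30])
next(it)                                  # consume the first element
clone = pickle.loads(pickle.dumps(it))    # round-trip the half-consumed iterator

# The clone resumes after the already-consumed element.
remaining = list(clone)
```

So the operation being tested here is perfectly legitimate; the question in the thread is only whether it leaks.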


Re: Why I fail so bad to check for memory leak with this code?

2022-07-21 Thread MRAB

On 21/07/2022 20:47, Marco Sulla wrote:

I tried to check for memory leaks in a bunch of functions of mine using a
simple decorator. It works, but it fails with this code, returning a random
count_diff at every run. Why?

import tracemalloc
import gc
import functools
from uuid import uuid4
import pickle

def getUuid():
    return str(uuid4())

def trace(func):
    @functools.wraps(func)
    def inner():
        tracemalloc.start()

        snapshot1 = tracemalloc.take_snapshot().filter_traces(
            (tracemalloc.Filter(True, __file__), )
        )

        for i in range(100):
            func()

        gc.collect()

        snapshot2 = tracemalloc.take_snapshot().filter_traces(
            (tracemalloc.Filter(True, __file__), )
        )

        top_stats = snapshot2.compare_to(snapshot1, 'lineno')
        tracemalloc.stop()

        for stat in top_stats:
            if stat.count_diff > 3:
                raise ValueError(f"count_diff: {stat.count_diff}")

    return inner

dict_1 = {getUuid(): i for i in range(1000)}

@trace
def func_76():
    pickle.dumps(iter(dict_1))

func_76()


It's something to do with pickling iterators because it still occurs 
when I reduce func_76 to:


@trace
def func_76():
    pickle.dumps(iter([]))


Why I fail so bad to check for memory leak with this code?

2022-07-21 Thread Marco Sulla
I tried to check for memory leaks in a bunch of functions of mine using a
simple decorator. It works, but it fails with this code, returning a random
count_diff at every run. Why?

import tracemalloc
import gc
import functools
from uuid import uuid4
import pickle

def getUuid():
    return str(uuid4())

def trace(func):
    @functools.wraps(func)
    def inner():
        tracemalloc.start()

        snapshot1 = tracemalloc.take_snapshot().filter_traces(
            (tracemalloc.Filter(True, __file__), )
        )

        for i in range(100):
            func()

        gc.collect()

        snapshot2 = tracemalloc.take_snapshot().filter_traces(
            (tracemalloc.Filter(True, __file__), )
        )

        top_stats = snapshot2.compare_to(snapshot1, 'lineno')
        tracemalloc.stop()

        for stat in top_stats:
            if stat.count_diff > 3:
                raise ValueError(f"count_diff: {stat.count_diff}")

    return inner

dict_1 = {getUuid(): i for i in range(1000)}

@trace
def func_76():
    pickle.dumps(iter(dict_1))

func_76()


Re: Debugging a memory leak

2020-10-23 Thread Dieter Maurer
Pasha Stetsenko wrote at 2020-10-23 11:32 -0700:
> ...
>  static int my_init(PyObject*, PyObject*, PyObject*) { return 0; }
>  static void my_dealloc(PyObject*) {}

I think the `dealloc` function is responsible for actually
freeing the memory area.

I see for example:
static void
Spec_dealloc(Spec* self)
{
    /* PyType_GenericAlloc that you get when you don't
       specify a tp_alloc always tracks the object. */
    PyObject_GC_UnTrack((PyObject *)self);
    if (self->weakreflist != NULL) {
        PyObject_ClearWeakRefs(OBJECT(self));
    }
    Spec_clear(self);
    Py_TYPE(self)->tp_free(OBJECT(self));
}




Re: Debugging a memory leak

2020-10-23 Thread Pasha Stetsenko
Thanks MRAB, this was it.
I guess I was thinking about tp_dealloc as a C++ destructor, where the base
class' destructor is called automatically.

On Fri, Oct 23, 2020 at 11:59 AM MRAB  wrote:

> On 2020-10-23 19:32, Pasha Stetsenko wrote:
> > Thanks for all the replies!
> > Following Chris's advice, I tried to reduce the code to the smallest
> > reproducible example (I guess I should have done it sooner),
> > but here's what I came up with:
> > ```
> >    #include <Python.h>
> >    #include <cstring>
> >
> >    static int my_init(PyObject*, PyObject*, PyObject*) { return 0; }
> >    static void my_dealloc(PyObject*) {}
> >
> >    static void init_mytype(PyObject* module) {
> >      PyTypeObject* type = new PyTypeObject();
> >      std::memset(type, 0, sizeof(PyTypeObject));
> >      Py_INCREF(type);
> >
> >      type->tp_basicsize = static_cast<Py_ssize_t>(sizeof(PyObject));
> >      type->tp_itemsize = 0;
> >      type->tp_flags = Py_TPFLAGS_DEFAULT;
> >      type->tp_new   = &PyType_GenericNew;
> >      type->tp_name  = "mytype";
> >      type->tp_doc   = "[temporary]";
> >      type->tp_init  = my_init;
> >      type->tp_dealloc = my_dealloc;
> >      PyType_Ready(type);
> >      PyModule_AddObject(module, "mytype", reinterpret_cast<PyObject*>(type));
> >    }
> > ```
>
> You're setting the deallocation function to 'my_dealloc', but that
> function isn't deallocating the object.
>
> Try something like this:
>
> static void my_dealloc(PyObject* obj) {
>  PyObject_DEL(obj);
> }
>
> [snip]


Re: Debugging a memory leak

2020-10-23 Thread MRAB

On 2020-10-23 19:32, Pasha Stetsenko wrote:

Thanks for all the replies!
Following Chris's advice, I tried to reduce the code to the smallest
reproducible example (I guess I should have done it sooner),
but here's what I came up with:
```
   #include <Python.h>
   #include <cstring>

   static int my_init(PyObject*, PyObject*, PyObject*) { return 0; }
   static void my_dealloc(PyObject*) {}

   static void init_mytype(PyObject* module) {
     PyTypeObject* type = new PyTypeObject();
     std::memset(type, 0, sizeof(PyTypeObject));
     Py_INCREF(type);

     type->tp_basicsize = static_cast<Py_ssize_t>(sizeof(PyObject));
     type->tp_itemsize = 0;
     type->tp_flags = Py_TPFLAGS_DEFAULT;
     type->tp_new   = &PyType_GenericNew;
     type->tp_name  = "mytype";
     type->tp_doc   = "[temporary]";
     type->tp_init  = my_init;
     type->tp_dealloc = my_dealloc;
     PyType_Ready(type);
     PyModule_AddObject(module, "mytype", reinterpret_cast<PyObject*>(type));
   }
```


You're setting the deallocation function to 'my_dealloc', but that 
function isn't deallocating the object.


Try something like this:

static void my_dealloc(PyObject* obj) {
PyObject_DEL(obj);
}

[snip]


Re: Debugging a memory leak

2020-10-23 Thread Pasha Stetsenko
Thanks for all the replies!
Following Chris's advice, I tried to reduce the code to the smallest
reproducible example (I guess I should have done it sooner),
but here's what I came up with:
```
  #include <Python.h>
  #include <cstring>

  static int my_init(PyObject*, PyObject*, PyObject*) { return 0; }
  static void my_dealloc(PyObject*) {}

  static void init_mytype(PyObject* module) {
    PyTypeObject* type = new PyTypeObject();
    std::memset(type, 0, sizeof(PyTypeObject));
    Py_INCREF(type);

    type->tp_basicsize = static_cast<Py_ssize_t>(sizeof(PyObject));
    type->tp_itemsize = 0;
    type->tp_flags = Py_TPFLAGS_DEFAULT;
    type->tp_new   = &PyType_GenericNew;
    type->tp_name  = "mytype";
    type->tp_doc   = "[temporary]";
    type->tp_init  = my_init;
    type->tp_dealloc = my_dealloc;
    PyType_Ready(type);
    PyModule_AddObject(module, "mytype", reinterpret_cast<PyObject*>(type));
  }
```
(my original `update` object had some fields in it, but it turns out they
don't need to be present in order for the problem to manifest, so in this
case I'm creating a custom object which is the same as a basic PyObject).
The `init_mytype()` function creates a custom type and attaches it to a
module. After this, creating 100M instances of the object will cause the
process memory to swell to 1.5G:
```
for i in range(10**8):
    z = dt.mytype()
```
I know this is not normal, because if instead I used a builtin type such as
`list`, or a python-defined class such as `class A: pass`, the
process would remain at a steady RAM usage of about 6 MB.

I've tested this on a Linux platform as well (using the docker image
quay.io/pypa/manylinux2010_x86_64), and the problem is present there as
well.

---
PS: The library I'm working on is open source, available at
https://github.com/h2oai/datatable, but the code I posted  above is
completely independent from my library.

On Fri, Oct 23, 2020 at 10:44 AM Dieter Maurer  wrote:

> Pasha Stetsenko wrote at 2020-10-22 17:51 -0700:
> > ...
> >I'm a maintainer of a python library "datatable" (can be installed from
> >PyPi), and i've been recently trying to debug a memory leak that occurs in
> >my library.
> >The program that exposes the leak is quite simple:
> >```
> >import datatable as dt
> >import gc  # just in case
> >
>def leak(n=10**7):
>    for i in range(n):
>        z = dt.update()
> >
> >leak()
> >gc.collect()
> >input("Press enter")
> >```
> >Note that despite the name, the `dt.update` is actually a class, though it
> >is defined via Python C API. Thus, this script is expected to create and
> >then immediately destroy 10 million simple python objects.
> >The observed behavior, however,  is  that the script consumes more and
> more
> >memory, eventually ending up at about 500M. The amount of memory the
> >program ends up consuming is directly proportional to the parameter `n`.
> >
> >The `gc.get_objects()` does not show any extra objects however.
>
> For efficiency reasons, the garbage collector treats only
> objects from types which are known to be potentially involved in cycles.
> A type implemented in "C" must define `tp_traverse` (in its type
> structure) to indicate this possibility.
> `tp_traverse` also tells the garbage collector how to find referenced
> objects.
> You will never find an object in the result of `get_objects` the
> type of which does not define `tp_traverse`.
>
> > ...
> >Thus, the object didn't actually "leak" in the normal sense: its refcount
> >is 0 and it was reclaimed by the Python runtime (when i print a debug
> >message in tp_dealloc, i see that the destructor gets called every time).
> >Still, Python keeps requesting more and more memory from the system
> instead
> >of reusing the memory  that was supposed to be freed.
>
> I would try to debug what happens further in `tp_dealloc` and its callers.
> You should eventually see a `PyMem_free` which gives the memory back
> to the Python memory management (built on top of the C memory management).
>
> Note that your `tp_dealloc` should not call the "C" library's "free".
> Python builds its own memory management (--> "PyMem_*") on top
> of the "C" library. It handles all "small" memory requests
> and, if necessary, requests big data chunks via `malloc` to split
> them into the smaller sizes.
> Should you "free" small memory blocks directly via "free", that memory
> becomes effectively unusable by Python (unless you have a special
> allocation as well).
>


Re: Debugging a memory leak

2020-10-23 Thread Dieter Maurer
Pasha Stetsenko wrote at 2020-10-22 17:51 -0700:
> ...
>I'm a maintainer of a python library "datatable" (can be installed from
>PyPi), and i've been recently trying to debug a memory leak that occurs in
>my library.
>The program that exposes the leak is quite simple:
>```
>import datatable as dt
>import gc  # just in case
>
>def leak(n=10**7):
>    for i in range(n):
>        z = dt.update()
>
>leak()
>gc.collect()
>input("Press enter")
>```
>Note that despite the name, the `dt.update` is actually a class, though it
>is defined via Python C API. Thus, this script is expected to create and
>then immediately destroy 10 million simple python objects.
>The observed behavior, however,  is  that the script consumes more and more
>memory, eventually ending up at about 500M. The amount of memory the
>program ends up consuming is directly proportional to the parameter `n`.
>
>The `gc.get_objects()` does not show any extra objects however.

For efficiency reasons, the garbage collector treats only
objects from types which are known to be potentially involved in cycles.
A type implemented in "C" must define `tp_traverse` (in its type
structure) to indicate this possibility.
`tp_traverse` also tells the garbage collector how to find referenced
objects.
You will never find an object in the result of `get_objects` the
type of which does not define `tp_traverse`.

> ...
>Thus, the object didn't actually "leak" in the normal sense: its refcount
>is 0 and it was reclaimed by the Python runtime (when i print a debug
>message in tp_dealloc, i see that the destructor gets called every time).
>Still, Python keeps requesting more and more memory from the system instead
>of reusing the memory  that was supposed to be freed.

I would try to debug what happens further in `tp_dealloc` and its callers.
You should eventually see a `PyMem_free` which gives the memory back
to the Python memory management (built on top of the C memory management).

Note that your `tp_dealloc` should not call the "C" library's "free".
Python builds its own memory management (--> "PyMem_*") on top
of the "C" library. It handles all "small" memory requests
and, if necessary, requests big data chunks via `malloc` to split
them into the smaller sizes.
Should you "free" small memory blocks directly via "free", that memory
becomes effectively unusable by Python (unless you have a special
allocation as well).


Re: Debugging a memory leak

2020-10-22 Thread Karen Shaeffer via Python-list



> On Oct 22, 2020, at 5:51 PM, Pasha Stetsenko  wrote:
> 
> Dear Python gurus,
> 
> I'm a maintainer of a python library "datatable" (can be installed from
> PyPi), and i've been recently trying to debug a memory leak that occurs in
> my library.
> The program that exposes the leak is quite simple:
> ```
> import datatable as dt
> import gc  # just in case
> 
> def leak(n=10**7):
>     for i in range(n):
>         z = dt.update()
> 
> leak()
> gc.collect()
> input("Press enter")
> ```

Hi Pasha,
dt.update() is acting on some object(s) outside the leak function body. And so,
even though local objects z, i and n are eventually garbage collected, the
side-effects of dt.update() are not affected by the return from the leak
function. You need to look at your module and carefully trace what happens when
dt.update() is executed. It seems to me that any memory consumed when
dt.update() is executed will not be released when the leak function returns.

humbly,
Karen



Re: Debugging a memory leak

2020-10-22 Thread Chris Angelico
On Fri, Oct 23, 2020 at 12:20 PM Pasha Stetsenko  wrote:
> I'm currently not sure where to go from here. Is there something wrong with
> my python object that prevents it from being correctly processed by the
> Python runtime? Because this doesn't seem to be the usual case of
> incrementing the refcount too many times.

Hard to say without seeing the source code. Is your code available anywhere?

A few things to test:

1) Can you replicate this behaviour with only standard library
classes? Try to find something implemented in C that uses tp_dealloc
in a similar way to you.
2) Can you replicate this with an extremely simple cut-down class, and
then publish the code for that class along with your question?
3) Does this happen on other operating systems or only on Mac OS? If
you can't yourself test this, hopefully posting code from the other
two questions will allow other people to try it.

ChrisA


Debugging a memory leak

2020-10-22 Thread Pasha Stetsenko
Dear Python gurus,

I'm a maintainer of a python library "datatable" (can be installed from
PyPi), and i've been recently trying to debug a memory leak that occurs in
my library.
The program that exposes the leak is quite simple:
```
import datatable as dt
import gc  # just in case

def leak(n=10**7):
    for i in range(n):
        z = dt.update()

leak()
gc.collect()
input("Press enter")
```
Note that despite the name, the `dt.update` is actually a class, though it
is defined via Python C API. Thus, this script is expected to create and
then immediately destroy 10 million simple python objects.
The observed behavior, however, is that the script consumes more and more
memory, eventually ending up at about 500M. The amount of memory the
program ends up consuming is directly proportional to the parameter `n`.

The `gc.get_objects()` does not show any extra objects however. The
`tracemalloc` module shows that there are indeed `n` objects leaked in the
`z=dt.update()` line, but doesn't give any extra details.

In order to dig deeper, I let the process wait on the "input()" line, and
wrote a script to dump the process' memory into a file. Then I scanned
through the file looking at any repeated patterns of 64-bit words. Inside
the memory dump, the following sequences were the most common:
```
  0x - 28660404
  0x0001024be6e8 - 4999762
  0x000101cbdea0 - 119049
  0x0054 - 59537
  0x0fd00ff0 - 59526
  0x0001 - 16895
  0x - 12378
  ...
```
The most suspicious sequence here is 0x0001024be6e8, which if you look
at that address with lldb, is the address of the PyTypeObject "dt.update",
which looks approximately like this:
```
(lldb) p *(PyTypeObject*)(0x00010f4206e8)
(PyTypeObject) $0 = {
  ob_base = {
ob_base = {
  ob_refcnt = 8
  ob_type = 0x00010ec216b0
}
ob_size = 0
  }
  tp_name = 0x00010f3a442c "datatable.update"
  tp_basicsize = 48
  tp_itemsize = 0
  tp_dealloc = 0x00010f0a8040 (_datatable.cpython-36m-darwin.so`void
py::_safe_dealloc(_object*) at
xobject.h:270)
  tp_print = 0x
  tp_getattr = 0x
  tp_setattr = 0x
  tp_as_async = 0x
  tp_repr = 0x00010eab3fa0 (Python`object_repr)
  tp_as_number = 0x
  tp_as_sequence = 0x
  tp_as_mapping = 0x
  tp_hash = 0x00010eb48640 (Python`_Py_HashPointer)
  tp_call = 0x
  tp_str = 0x00010eab40d0 (Python`object_str)
  tp_getattro = 0x00010eaa1ae0 (Python`PyObject_GenericGetAttr)
  tp_setattro = 0x00010eaa1ce0 (Python`PyObject_GenericSetAttr)
  tp_as_buffer = 0x
  tp_flags = 266240
...
```
Thus, I can be quite certain that 0x1024be6e8 is the address of the
`dt.update` type structure.

The way this address appears in the memory dump looks like this:
```
0x7f97875cbb10: 0x 0x 0x024be6e8 0x0001
0x7f97875cbb20: 0x 0x 0x 0x
0x7f97875cbb30: 0x 0x 0x 0x
0x7f97875cbb40: 0x 0x 0x024be6e8 0x0001
0x7f97875cbb50: 0x 0x 0x 0x
0x7f97875cbb60: 0x 0x 0x 0x
0x7f97875cbb70: 0x 0x 0x024be6e8 0x0001
0x7f97875cbb80: 0x 0x 0x 0x
0x7f97875cbb90: 0x 0x 0x 0x
0x7f97875cbba0: 0x 0x 0x024be6e8 0x0001
0x7f97875cbbb0: 0x 0x 0x 0x
0x7f97875cbbc0: 0x 0x 0x 0x
```
If i guess that all these represent the leaked objects, then inspecting any
of them shows the following:
```
(lldb) p *(PyObject*)(0x7f97875cbb10)
(PyObject) $2 = {
  ob_refcnt = 0
  ob_type = 0x00010f4206e8
}
```
Thus, the object didn't actually "leak" in the normal sense: its refcount
is 0 and it was reclaimed by the Python runtime (when i print a debug
message in tp_dealloc, i see that the destructor gets called every time).
Still, Python keeps requesting more and more memory from the system instead
of reusing the memory  that was supposed to be freed.

I'm currently not sure where to go from here. Is there something wrong with
my python object that prevents it from being correctly processed by the
Python runtime? Because this doesn't seem to be the usual case of
incrementing the refcount too many times.

This behavior was observed in Python 3.6.6 and also Python 3.8.0b2.


Re: Tracking a memory leak in C extension - interpreting the output of PYTHONMALLOCSTATS

2018-07-28 Thread Stefan Behnel
Bartosz Golaszewski wrote on 24.07.2018 at 13:05:
> Ok I've found the problem and it's my fault. From tp_dealloc's documentation:
> 
> ---
> The destructor function should free all references which the instance
> owns, free all memory buffers owned by the instance (using the freeing
> function corresponding to the allocation function used to allocate the
> buffer), and finally (as its last action) call the type’s tp_free
> function.
> ---
> 
> I'm not calling the tp_free function...

If you want to avoid the little traps of the C-API in the future, give
Cython a try. It can generate all the glue code safely for you, and
probably also generates faster wrapper code than you would write yourself.

Stefan



Re: Tracking a memory leak in C extension - interpreting the output of PYTHONMALLOCSTATS

2018-07-24 Thread Bartosz Golaszewski
2018-07-24 13:30 GMT+02:00 Bartosz Golaszewski :
> 2018-07-24 12:09 GMT+02:00 Bartosz Golaszewski :
>> 2018-07-23 21:51 GMT+02:00 Thomas Jollans :
>>> On 23/07/18 20:02, Bartosz Golaszewski wrote:
>>>> Hi!
>>>
>>> Hey!
>>>
>>>> A user recently reported a memory leak in python bindings (C extension
>>>> module) to a C library[1] I wrote. I've been trying to fix it since
>>>> but so far without success. Since I'm probably dealing with a space
>>>> leak rather than actual memory leak, valgrind didn't help much even
>>>> when using malloc as allocator. I'm now trying to use
>>>> PYTHONMALLOCSTATS but need some help on how to interpret the output
>>>> emitted when it's enabled.
>>>
>>> Oh dear.
>>>
>>>>
>>>> [snip]
>>>>
>>>> The number of pools in arena 53 continuously grows. Its size column
>>>> says: 432. I couldn't find any documentation on what it means but I
>>>> assume it's an allocation of 432 bytes. [...]
>>>
>>> I had a quick look at the code (because what else does one do for fun);
>>> I don't understand much, but what I can tell you is that
>>>  (a) yes, that is an allocation size in bytes, and
>>>  (b) as you can see, it uses intervals of 8. This means that pool 53
>>>  is used for allocations of 424 < nbytes <= 432 bytes. Maybe your
>>>  breakpoint needs tweaking.
>>>  (c) Try breaking on _PyObject_Malloc or pymalloc_alloc. I think they're
>>>  called by both PyMem_Malloc and PyObject_Malloc.
>>>
>>> int _PyObject_DebugMallocStats(FILE *out)
>>>
>>> https://github.com/python/cpython/blob/b18f8bc1a77193c372d79afa79b284028a2842d7/Objects/obmalloc.c#L2435
>>>
>>> static int pymalloc_alloc(void *ctx, void **ptr_p, size_t nbytes)
>>>
>>> https://github.com/python/cpython/blob/b18f8bc1a77193c372d79afa79b284028a2842d7/Objects/obmalloc.c#L1327
>>>
>>>
>>> Have fun debugging!
>>>
>>> -- Thomas
>>>
>>>
>
> [snip!]
>
>>
>> I don't see any other allocation of this size. Can this be some bug in
>> the interpreter?
>>
>> Bart
>
> Ok so this is strange: I can fix the leak if I explicitly call
> PyObject_Free() on the leaking object which is created by "calling"
> its type. Is this normal? Shouldn't Py_DECREF() be enough? The
> relevant dealloc callback is called from Py_DECREF() but the object's
> memory is not freed.
>
> Bart

Ok I've found the problem and it's my fault. From tp_dealloc's documentation:

---
The destructor function should free all references which the instance
owns, free all memory buffers owned by the instance (using the freeing
function corresponding to the allocation function used to allocate the
buffer), and finally (as its last action) call the type’s tp_free
function.
---

I'm not calling the tp_free function...

Best regards,
Bartosz Golaszewski
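[As a side note for anyone reproducing this kind of investigation: the pool/arena statistics that PYTHONMALLOCSTATS prints can also be dumped on demand from inside Python, via a CPython-specific private helper:]

```python
import sys

# Write the pymalloc arena/pool/block statistics to stderr right now,
# instead of waiting for the PYTHONMALLOCSTATS output at interpreter exit.
sys._debugmallocstats()
```

This produces the same per-size-class table discussed in this thread, so you can snapshot it before and after a suspect operation.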


Re: Tracking a memory leak in C extension - interpreting the output of PYTHONMALLOCSTATS

2018-07-24 Thread Bartosz Golaszewski
2018-07-24 12:09 GMT+02:00 Bartosz Golaszewski :
> 2018-07-23 21:51 GMT+02:00 Thomas Jollans :
>> On 23/07/18 20:02, Bartosz Golaszewski wrote:
>>> Hi!
>>
>> Hey!
>>
>>> A user recently reported a memory leak in python bindings (C extension
>>> module) to a C library[1] I wrote. I've been trying to fix it since
>>> but so far without success. Since I'm probably dealing with a space
>>> leak rather than actual memory leak, valgrind didn't help much even
>>> when using malloc as allocator. I'm now trying to use
>>> PYTHONMALLOCSTATS but need some help on how to interpret the output
>>> emitted when it's enabled.
>>
>> Oh dear.
>>
>>>
>>> [snip]
>>>
>>> The number of pools in arena 53 continuously grows. Its size column
>>> says: 432. I couldn't find any documentation on what it means but I
>>> assume it's an allocation of 432 bytes. [...]
>>
>> I had a quick look at the code (because what else does one do for fun);
>> I don't understand much, but what I can tell you is that
>>  (a) yes, that is an allocation size in bytes, and
>>  (b) as you can see, it uses intervals of 8. This means that pool 53
>>  is used for allocations of 424 < nbytes <= 432 bytes. Maybe your
>>  breakpoint needs tweaking.
>>  (c) Try breaking on _PyObject_Malloc or pymalloc_alloc. I think they're
>>  called by both PyMem_Malloc and PyObject_Malloc.
>>
>> int _PyObject_DebugMallocStats(FILE *out)
>>
>> https://github.com/python/cpython/blob/b18f8bc1a77193c372d79afa79b284028a2842d7/Objects/obmalloc.c#L2435
>>
>> static int pymalloc_alloc(void *ctx, void **ptr_p, size_t nbytes)
>>
>> https://github.com/python/cpython/blob/b18f8bc1a77193c372d79afa79b284028a2842d7/Objects/obmalloc.c#L1327
>>
>>
>> Have fun debugging!
>>
>> -- Thomas
>>
>>

[snip!]

>
> I don't see any other allocation of this size. Can this be some bug in
> the interpreter?
>
> Bart

Ok so this is strange: I can fix the leak if I explicitly call
PyObject_Free() on the leaking object which is created by "calling"
its type. Is this normal? Shouldn't Py_DECREF() be enough? The
relevant dealloc callback is called from Py_DECREF() but the object's
memory is not freed.

Bart
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tracking a memory leak in C extension - interpreting the output of PYTHONMALLOCSTATS

2018-07-24 Thread Bartosz Golaszewski
2018-07-23 21:51 GMT+02:00 Thomas Jollans :
> On 23/07/18 20:02, Bartosz Golaszewski wrote:
>> Hi!
>
> Hey!
>
>> A user recently reported a memory leak in python bindings (C extension
>> module) to a C library[1] I wrote. I've been trying to fix it since
>> but so far without success. Since I'm probably dealing with a space
>> leak rather than actual memory leak, valgrind didn't help much even
>> when using malloc as allocator. I'm now trying to use
>> PYTHONMALLOCSTATS but need some help on how to interpret the output
>> emitted when it's enabled.
>
> Oh dear.
>
>>
>> [snip]
>>
>> The number of pools in arena 53 continuously grows. Its size column
>> says: 432. I couldn't find any documentation on what it means but I
>> assume it's an allocation of 432 bytes. [...]
>
> I had a quick look at the code (because what else does one do for fun);
> I don't understand much, but what I can tell you is that
>  (a) yes, that is an allocation size in bytes, and
>  (b) as you can see, it uses intervals of 8. This means that pool 53
>  is used for allocations of 424 < nbytes <= 432 bytes. Maybe your
>  breakpoint needs tweaking.
>  (c) Try breaking on _PyObject_Malloc or pymalloc_alloc. I think they're
>  called by both PyMem_Malloc and PyObject_Malloc.
>
> int _PyObject_DebugMallocStats(FILE *out)
>
> https://github.com/python/cpython/blob/b18f8bc1a77193c372d79afa79b284028a2842d7/Objects/obmalloc.c#L2435
>
> static int pymalloc_alloc(void *ctx, void **ptr_p, size_t nbytes)
>
> https://github.com/python/cpython/blob/b18f8bc1a77193c372d79afa79b284028a2842d7/Objects/obmalloc.c#L1327
>
>
> Have fun debugging!
>
> -- Thomas
>
>
>>
>> How do I use the info produced by PYTHONMALLOCSTATS to get to the
>> culprit of the leak? Is there anything wrong in my reasoning here?
>>
>> Best regards,
>> Bartosz Golaszewski
>>
>> [1] https://git.kernel.org/pub/scm/libs/libgpiod/libgpiod.git/
>>
>
> --
> https://mail.python.org/mailman/listinfo/python-list

Thanks for the hints!

I've been able to pinpoint the allocation in question to this line:


https://git.kernel.org/pub/scm/libs/libgpiod/libgpiod.git/tree/bindings/python/gpiodmodule.c?h=next#n1238

with the following stack trace:

#0  _PyObject_Malloc (ctx=0x0, nbytes=432) at Objects/obmalloc.c:1523
#1  0x55614c38 in _PyMem_DebugRawAlloc (ctx=0x55a3c340
<_PyMem_Debug+96>, nbytes=400, use_calloc=0) at
Objects/obmalloc.c:1998
#2  0x556238c5 in PyType_GenericAlloc (type=0x76e06820
, nitems=0) at Objects/typeobject.c:972
#3  0x55627ba5 in type_call (type=0x76e06820
, args=0x76e21910, kwds=0x0) at
Objects/typeobject.c:929
#4  0x555cc666 in PyObject_Call (kwargs=0x0, args=, callable=0x76e06820 )
at Objects/call.c:245
#5  PyEval_CallObjectWithKeywords (kwargs=0x0, args=,
callable=0x76e06820 ) at Objects/call.c:826
#6  PyObject_CallObject (callable=0x76e06820 ,
args=) at Objects/call.c:834
#7  0x76c008dd in gpiod_LineToLineBulk
(line=line@entry=0x75bbd240) at gpiodmodule.c:1238
#8  0x76c009af in gpiod_Line_set_value (self=0x75bbd240,
args=) at gpiodmodule.c:442
#9  0x555c9ef8 in _PyMethodDef_RawFastCallKeywords
(method=0x76e06280 ,
self=self@entry=0x75bbd240, args=args@entry=0x55b15e18,
nargs=nargs@entry=1, kwnames=kwnames@entry=0x0) at Objects/call.c:694
#10 0x55754db9 in _PyMethodDescr_FastCallKeywords
(descrobj=0x76e344d0, args=args@entry=0x55b15e10,
nargs=nargs@entry=2,
kwnames=kwnames@entry=0x0) at Objects/descrobject.c:288
#11 0x555b7fcd in call_function (kwnames=0x0, oparg=2,
pp_stack=) at Python/ceval.c:4581
#12 _PyEval_EvalFrameDefault (f=, throwflag=) at Python/ceval.c:3176
#13 0x55683b7c in PyEval_EvalFrameEx (throwflag=0,
f=0x55b15ca0) at Python/ceval.c:536
#14 _PyEval_EvalCodeWithName (_co=_co@entry=0x77e50460,
globals=globals@entry=0x77f550e8,
locals=locals@entry=0x77e50460,
args=args@entry=0x0, argcount=argcount@entry=0,
kwnames=kwnames@entry=0x0, kwargs=0x0, kwcount=0, kwstep=2, defs=0x0,
defcount=0,
kwdefs=0x0, closure=0x0, name=0x0, qualname=0x0) at Python/ceval.c:3941
#15 0x55683ca3 in PyEval_EvalCodeEx (closure=0x0, kwdefs=0x0,
defcount=0, defs=0x0, kwcount=0, kws=0x0, argcount=0, args=0x0,
locals=locals@entry=0x77e50460,
globals=globals@entry=0x77f550e8, _co=_co@entry=0x77e50460) at
Python/ceval.c:3970
#16 PyEval_EvalCode (co=co@entry=0x77e50460,
globals=globals@entry=0x77efcc50,
locals=locals@entry=0x77efcc50)
at Python/ceval.c:513
#17 0x556bb099 in run_mod (arena=0x77f550e8,
flags=0x7fffe1a0, locals=0x77efcc50, globals=0x77efcc50

Re: Tracking a memory leak in C extension - interpreting the output of PYTHONMALLOCSTATS

2018-07-23 Thread Thomas Jollans
On 23/07/18 20:02, Bartosz Golaszewski wrote:
> Hi!

Hey!

> A user recently reported a memory leak in python bindings (C extension
> module) to a C library[1] I wrote. I've been trying to fix it since
> but so far without success. Since I'm probably dealing with a space
> leak rather than actual memory leak, valgrind didn't help much even
> when using malloc as allocator. I'm now trying to use
> PYTHONMALLOCSTATS but need some help on how to interpret the output
> emitted when it's enabled.

Oh dear.

> 
> [snip]
> 
> The number of pools in arena 53 continuously grows. Its size column
> says: 432. I couldn't find any documentation on what it means but I
> assume it's an allocation of 432 bytes. [...]

I had a quick look at the code (because what else does one do for fun);
I don't understand much, but what I can tell you is that
 (a) yes, that is an allocation size in bytes, and
 (b) as you can see, it uses intervals of 8. This means that pool 53
 is used for allocations of 424 < nbytes <= 432 bytes. Maybe your
 breakpoint needs tweaking.
 (c) Try breaking on _PyObject_Malloc or pymalloc_alloc. I think they're
 called by both PyMem_Malloc and PyObject_Malloc.

int _PyObject_DebugMallocStats(FILE *out)

https://github.com/python/cpython/blob/b18f8bc1a77193c372d79afa79b284028a2842d7/Objects/obmalloc.c#L2435

static int pymalloc_alloc(void *ctx, void **ptr_p, size_t nbytes)

https://github.com/python/cpython/blob/b18f8bc1a77193c372d79afa79b284028a2842d7/Objects/obmalloc.c#L1327


Have fun debugging!

-- Thomas


> 
> How do I use the info produced by PYTHONMALLOCSTATS to get to the
> culprit of the leak? Is there anything wrong in my reasoning here?
> 
> Best regards,
> Bartosz Golaszewski
> 
> [1] https://git.kernel.org/pub/scm/libs/libgpiod/libgpiod.git/
> 

-- 
https://mail.python.org/mailman/listinfo/python-list
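Thomas's size-class arithmetic is easy to sanity-check. The 8-byte quantum and the `(index + 1) * 8` mapping below are CPython obmalloc implementation details, reproduced here only to show why 424 < nbytes <= 432 all land in class 53:

```python
# pymalloc serves small requests from pools of fixed-size blocks:
# size class index = (nbytes - 1) // 8, block size = (index + 1) * 8.
ALIGNMENT = 8

def size_class(nbytes):
    """Return (index, block_size) of the pool that serves nbytes."""
    index = (nbytes - 1) // ALIGNMENT
    return index, (index + 1) * ALIGNMENT

# 424 < nbytes <= 432 all land in class 53 (block size 432), which is
# exactly the row that keeps growing in the PYTHONMALLOCSTATS output.
for n in (424, 425, 432, 433):
    print(n, '->', size_class(n))
```

So a breakpoint conditioned on `nbytes == 432` misses most allocations in that pool; conditioning on the class index (or a range of sizes) does not.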


Tracking a memory leak in C extension - interpreting the output of PYTHONMALLOCSTATS

2018-07-23 Thread Bartosz Golaszewski
Hi!

A user recently reported a memory leak in python bindings (C extension
module) to a C library[1] I wrote. I've been trying to fix it since
but so far without success. Since I'm probably dealing with a space
leak rather than actual memory leak, valgrind didn't help much even
when using malloc as allocator. I'm now trying to use
PYTHONMALLOCSTATS but need some help on how to interpret the output
emitted when it's enabled.

I'm setting PYTHONMALLOCSTATS=1 & PYTHONMALLOC=pymalloc_debug and then
running the script that triggers the leak. The last debug message is
as follows:

class   size   num pools   blocks in use  avail blocks
-----   ----   ---------   -------------  ------------
    3     32           1               2           124
    4     40           2               9           193
    5     48           1               3            81
    6     56           1               4            68
    7     64          11             295           398
    8     72           9             260           244
    9     80          36             831           969
   10     88          79            1542          2092
   11     96         131            3262          2240
   12    104          70            1903           757
   13    112          19             289           395
   14    120          11             139           224
   15    128           7              88           129
   16    136           6              70           104
   17    144           5              44            96
   18    152           4              47            57
   19    160          24             342           258
   20    168           4              17            79
   21    176          24             360           192
   22    184           2               8            36
   23    192           2              11            31
   24    200          22             227           213
   25    208           3              13            44
   26    216           3               7            47
   27    224           2              13            23
   28    232           2               6            28
   29    240           3               8            40
   30    248           2              10            22
   31    256           3              10            35
   32    264           2               9            21
   33    272           3              11            31
   34    280           2              10            18
   35    288           1               3            11
   36    296           2               9            17
   37    304           2               9            17
   38    312           2               5            19
   39    320           2               5            19
   40    328          14             105            63
   41    336           2               3            21
   42    344           1               3             8
   43    352           1               3             8
   44    360           2               3            19
   45    368           1               3             8
   46    376           1               1             9
   47    384           2               4            16
   48    392           2               6            14
   49    400           2               3            17
   50    408           1               1             8
   51    416           1               3             6
   52    424           2               4            14
   53    432       50967          458680            23
   54    440           3               9            18
   55    448           4              15            21
   56    456           4              12            20
   57    464           3               8            16
   58    472           2               5            11
   59    480           1               4             4
   60    488           1               3             5
   61    496           4              11            21
   62    504           4              13            19
   63    512           2               7             7

# times object malloc called       =            2,811,245
# arenas allocated total           =                  810
# arenas reclaimed                 =                    0
# arenas highwater mark            =                  810
# arenas allocated current         =                  810
810 arenas * 262144 bytes/arena    =          212,336,640

# bytes in allocated blocks        =          199,277,432
# bytes in available blocks        =            1,138,472
308 unused pools * 4096 bytes      =            1,261,568
# bytes lost to pool headers       =            2,473,536
# bytes lost to quantization       =            8,185,632
# bytes lost to arena alignment    =                    0
Total                              =          212,336,640

The number of pools in arena 53 continuously grows. Its size column
says: 432. I couldn't find any documentation on what it means but I
assume it's an allocation of 432 bytes. [...]
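The thread predates it being common practice, but on Python 3.4+ the stdlib `tracemalloc` module can attribute allocation growth to source lines directly, including allocations a C extension makes through `PyObject_Malloc`, since those go through the traced allocator. A minimal sketch; the `bytearray` loop is just a stand-in for the suspect 432-byte objects:

```python
import tracemalloc

tracemalloc.start(10)                # keep up to 10 frames per allocation

before = tracemalloc.take_snapshot()
leaky = [bytearray(432) for _ in range(1000)]   # stand-in for the suspect blocks
after = tracemalloc.take_snapshot()

# Attribute the growth between the two snapshots to source lines.
for stat in after.compare_to(before, 'lineno')[:3]:
    print(stat)
```

Taking a snapshot per iteration of the leaking script and diffing consecutive ones pinpoints the allocating line without any gdb breakpoints.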

Re: memory leak with re.match

2017-07-05 Thread Peter Otten
Mayling ge wrote:

> Sorry. The code  here is just  to describe  the issue and  is just  pseudo
> code, 

That is the problem with your post. It's too vague for us to make sense of 
it.

Can you provide a minimal example that shows what you think is a "memory 
leak"? Then we can either help you avoid storing extra stuff or confirm an 
actual leak and help you prepare a bug report.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: memory leak with re.match

2017-07-05 Thread Mayling ge
   Sorry. The code here is just to describe the issue and is just pseudo
   code, please forgive some typo. I list out lines because I need line
   context.
   Sent from Mail Master
   On 07/05/2017 15:52, [1]Albert-Jan Roskam wrote:

 From: Python-list
  on behalf of
 Mayling ge 
 Sent: Tuesday, July 4, 2017 9:01 AM
 To: python-list
 Subject: memory leak with re.match

Hi,

My function is in the following way to handle file line by line.
 There are
multiple error patterns  defined and  need to apply  to each  line.
 I  use
multiprocessing.Pool to handle the file in block.

The memory usage increases to 2G for a 1G file. And stays in 2G even
 after
the file processing. File closed in the end.

If I comment  out the  call to re_pat.match,  memory usage  is
 normal  and
keeps under 100Mb.

am I using re in a wrong way? I cannot figure out a way to fix the
 memory
leak. And I googled .

def line_match(lines, errors)

   

lines = list(itertools.islice(fo, line_per_proc))

 ===> do you really need to listify the iterator?
if not lines:

break

result = p.apply_async(line_match, args=(errors, lines))

 ===> the signature of line_match is (lines, errors), in args you do
 (errors, lines)

References

   Visible links
   1. mailto:sjeik_ap...@hotmail.com
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: memory leak with re.match

2017-07-05 Thread Albert-Jan Roskam
From: Python-list  on 
behalf of Mayling ge 
Sent: Tuesday, July 4, 2017 9:01 AM
To: python-list
Subject: memory leak with re.match
    
   Hi,

   My function is in the following way to handle file line by line. There are
   multiple error patterns  defined and  need to apply  to each  line. I  use
   multiprocessing.Pool to handle the file in block.

   The memory usage increases to 2G for a 1G file. And stays in 2G even after
   the file processing. File closed in the end.

   If I comment  out the  call to re_pat.match,  memory usage  is normal  and
   keeps under 100Mb.

   am I using re in a wrong way? I cannot figure out a way to fix the  memory
   leak. And I googled .

   def line_match(lines, errors)

  

   lines = list(itertools.islice(fo, line_per_proc))

===> do you really need to listify the iterator?
   if not lines:

   break

   result = p.apply_async(line_match, args=(errors, lines))

===> the signature of line_match is (lines, errors), in args you do (errors, 
lines)

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: memory leak with re.match

2017-07-05 Thread Mayling ge
   Thanks. I actually comment out all handling code. The loop ends with the
   re_pat.match and nothing followed.
   Sent from Mail Master
   On 07/05/2017 08:31, [1]Cameron Simpson wrote:

 On 04Jul2017 17:01, Mayling ge  wrote:
 >   My function is in the following way to handle file line by line.
 There are
 >   multiple error patterns  defined and  need to apply  to each  line.
 I  use
 >   multiprocessing.Pool to handle the file in block.
 >
 >   The memory usage increases to 2G for a 1G file. And stays in 2G even
 after
 >   the file processing. File closed in the end.
 >
 >   If I comment  out the  call to re_pat.match,  memory usage  is
 normal  and
 >   keeps under 100Mb. [...]
 >
 >   def line_match(lines, errors)
 >   for error in errors:
 >   try:
 >   re_pat = re.compile(error['pattern'])
 >   except Exception:
 >   print_error
 >   continue
 >   for line in lines:
 >   m = re_pat.match(line)
 >   # other code to handle matched object
 [...]
 >   Notes: I  omit  some  code  as  I  think  the  significant
  difference  is
 >   with/without re_pat.match(...)

 Hmm. Does the handling code (omitted) keep the line or match object in
 memory?

 If leaving out the "m = re_pat.match(line)" triggers the leak, and
 presuming
 that line itself doesn't leak, then I would start to suspect the
 handling code
 is not letting go of the match object "m" or of the line (which is
 probably
 attached to the match object "m" to support things like m.group() and so
 forth).

 So you might need to show us the handling code.

 Cheers,
 Cameron Simpson 

References

   Visible links
   1. mailto:c...@zip.com.au
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: memory leak with re.match

2017-07-04 Thread Cameron Simpson

On 04Jul2017 17:01, Mayling ge  wrote:

  My function is in the following way to handle file line by line. There are
  multiple error patterns  defined and  need to apply  to each  line. I  use
  multiprocessing.Pool to handle the file in block.

  The memory usage increases to 2G for a 1G file. And stays in 2G even after
  the file processing. File closed in the end.

  If I comment  out the  call to re_pat.match,  memory usage  is normal  and
  keeps under 100Mb. [...]

  def line_match(lines, errors)
  for error in errors:
  try:
  re_pat = re.compile(error['pattern'])
  except Exception:
  print_error
  continue
  for line in lines:
  m = re_pat.match(line)
  # other code to handle matched object

[...]

  Notes: I  omit  some  code  as  I  think  the  significant  difference  is
  with/without re_pat.match(...)


Hmm. Does the handling code (omitted) keep the line or match object in memory?

If leaving out the "m = re_pat.match(line)" triggers the leak, and presuming 
that line itself doesn't leak, then I would start to suspect the handling code 
is not letting go of the match object "m" or of the line (which is probably 
attached to the match object "m" to support things like m.group() and so 
forth).


So you might need to show us the handling code.

Cheers,
Cameron Simpson 
--
https://mail.python.org/mailman/listinfo/python-list
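Cameron's point, that a retained match object pins the whole line via `m.string`, can be demonstrated directly. The `Line` subclass below exists only because a plain `str` cannot be weak-referenced; everything else is standard behaviour:

```python
import re
import weakref

class Line(str):
    """str subclass only so the line can be weak-referenced."""

pat = re.compile(r'\w+')
line = Line("error 42")
ref = weakref.ref(line)

m = pat.match(line)        # m.string keeps the whole line alive
del line
assert ref() is not None   # still reachable through the match object

text = m.group(0)          # keep just the matched text instead...
del m                      # ...and drop the match object
assert ref() is None       # now the line has been freed
```

So handling code that appends match objects (or the lines themselves) to a long-lived list will retain every matched line of the 1G file, which is consistent with the 2G figure reported.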


memory leak with re.match

2017-07-04 Thread Mayling ge
   Hi,

   My function is in the following way to handle file line by line. There are
   multiple error patterns defined and need to apply to each line. I use
   multiprocessing.Pool to handle the file in block.

   The memory usage increases to 2G for a 1G file. And stays in 2G even after
   the file processing. File closed in the end.

   If I comment out the call to re_pat.match, memory usage is normal and
   keeps under 100Mb.

   am I using re in a wrong way? I cannot figure out a way to fix the memory
   leak. And I googled .

   def line_match(lines, errors):
       for error in errors:
           try:
               re_pat = re.compile(error['pattern'])
           except Exception:
               print_error
               continue
           for line in lines:
               m = re_pat.match(line)
               # other code to handle matched object

   def process_large_file(fo):
       p = multiprocessing.Pool()
       while True:
           lines = list(itertools.islice(fo, line_per_proc))
           if not lines:
               break
           result = p.apply_async(line_match, args=(errors, lines))

   Notes: I omit some code as I think the significant difference is
   with/without re_pat.match(...)

   Regards,

   -Meiling
-- 
https://mail.python.org/mailman/listinfo/python-list
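For reference, a runnable sketch folding in the two review comments from this thread: pass the arguments in the declared `(lines, errors)` order, and keep only the matched text rather than the match objects, so the lines can be garbage-collected. The `errors` structure and chunk size are illustrative stand-ins, not the original code:

```python
import itertools
import multiprocessing
import re

LINES_PER_CHUNK = 10000   # stand-in for line_per_proc

def line_match(lines, errors):
    hits = []
    for error in errors:
        try:
            re_pat = re.compile(error['pattern'])
        except re.error as exc:
            print('bad pattern %r: %s' % (error['pattern'], exc))
            continue
        for line in lines:
            m = re_pat.match(line)
            if m:
                # keep only the matched text, not the match object, so
                # neither m nor the original line stays pinned in memory
                hits.append(m.group(0))
    return hits

def process_large_file(fo, errors):
    with multiprocessing.Pool() as p:
        pending = []
        while True:
            lines = list(itertools.islice(fo, LINES_PER_CHUNK))
            if not lines:
                break
            # argument order matches the signature: (lines, errors)
            pending.append(p.apply_async(line_match, (lines, errors)))
        return [r.get() for r in pending]
```

Collecting the `AsyncResult` objects and calling `get()` also surfaces any exception raised in a worker, which `apply_async` otherwise swallows silently.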


Re: help with memory leak

2014-05-27 Thread Chris Angelico
On Wed, May 28, 2014 at 5:56 AM, Neal Becker  wrote:
> I'm trying to track down a memory leak in a fairly large code.  It uses a lot 
> of
> numpy, and a bit of c++-wrapped code.  I don't yet know if the leak is purely
> python or is caused by the c++ modules.

Something to try, which would separate the two types of leak: Run your
program in a separate namespace of some sort (eg a function), make
sure all your globals have been cleaned up, run a gc collection, and
then see if you still have a whole lot more junk around. If that
cleans everything up, it's some sort of refloop; if it doesn't, it's
either a global you didn't find, or a C-level refleak.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list
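Chris's triage procedure can be sketched concretely. `run_workload` here is a hypothetical stand-in for one iteration of the real main loop; the point is the bookkeeping around it:

```python
import gc

def run_workload():
    """Stand-in for one iteration of the real program's main loop."""
    data = [[i] * 100 for i in range(1000)]
    return len(data)

gc.collect()
baseline = len(gc.get_objects())

run_workload()             # everything is created inside the function
gc.collect()               # break any pure-Python reference cycles

leftover = len(gc.get_objects()) - baseline
print('objects surviving the workload:', leftover)
# ~0 and stable across repeats: any leak was a collectable refloop.
# Climbing on every repeat: a missed global, or a C-level refcount
# leak that Python's collector cannot see.
```

Running the workload in a function scope guarantees no accidental module-level name keeps the objects alive, which is the separation between the two leak types Chris describes.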


help with memory leak

2014-05-27 Thread Neal Becker
I'm trying to track down a memory leak in a fairly large code.  It uses a lot 
of 
numpy, and a bit of c++-wrapped code.  I don't yet know if the leak is purely 
python or is caused by the c++ modules.

At each iteration of the main loop, I call gc.collect()
If I then look at gc.garbage, it is empty.

I've tried using objgraph.  I don't know how to interpret the result.  I don't 
know if this is the main leakage, but I see that each iteration there are more
'Burst' objects.  If I look at backrefs to them using this code:

   for frame in count(1):  ## main loop starts here (count from itertools)
       gc.collect()
       objs = objgraph.by_type('Burst')
       print(objs)
       if len(objs) != 0:
           print(objs[0], gc.is_tracked(objs[0]))
           objgraph.show_backrefs(objs[0], max_depth=10, refcounts=True)

I will get a graph like that attached

A couple of strange things.

The refcounts (9) of the Burst object don't match the number of arrows into it.
There are 2 lists with 0 refs.  Why weren't they collected?
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Number of objects grows unbounded...Memory leak

2014-05-03 Thread ptb
Turns out one of the libraries I am using has a cache system.  If I shut it off 
then my problem goes away...

On Saturday, May 3, 2014 7:15:59 AM UTC-6, ptb wrote:
> Hello all,
>
> I'm using Python 3.4 and am seeing the memory usage of my program grow
> unbounded.  Here's a snippet of the loop driving the main computation
>
> opt_dict = {'interior':cons_dict['int_eq'],'lboundary':cons_dict['lboundary'],
> 'rboundary':cons_dict['rboundary'],
> 'material_props':{'conv':0.9,'diff':0.01},
> 'file_ident':ident,'numeric':True,'file_set':files}
>
> # this produces roughly 25,000 elements
> args = product(zip(repeat(nx[-1]),ib_frac),nx,subs)
>
> for i,arg in enumerate(args):
>     my_func(a=arg[0],b=arg[1],c=arg[2],**opt_dict)
>     gc.collect()
>     print(i,len(gc.get_objects()))
>
> A few lines of output:
>
> progress
> 0 84883
> 1 95842
> 2 106655
> 3 117576
> 4 128444
> 5 139309
> 6 150172
> 7 161015
> 8 171886
> 9 182739
> 10 193593
> 11 204455
> 12 215284
> 13 226102
> 14 236922
> 15 247804
> 16 258567
> 17 269386
> 18 280213
> 19 291032
> 20 301892
> 21 312701
> 22 323536
> 23 334391
> 24 345239
> 25 356076
> 26 366923
> 27 377701
> 28 388532
> 29 399321
> 30 410127
> 31 420917
> 32 431732
> 33 442489
> 34 453320
> 35 464147
> 36 475071
> 37 485593
> 38 496068
> 39 506568
> 40 517040
> 41 527531
> 42 538099
> 43 548658
> 44 559205
> 45 569732
> 46 580214
> 47 590655
> 48 601165
> 49 611656
> 50 622179
> 51 632645
> 52 643186
> 53 653654
> 54 664146
> ...
>
> As you can see the number of objects keep growing and my memory usage grows
> proportionately.  Also, my_func doesn't return any values but simply writes
> data to a file.
>
> I was under the impression that this sort of thing couldn't happen in Python.
> Can someone explain (1) how this is possible? and (2) how do I fix it?
>
> Hopefully that's enough information.
>
> Thanks for your help,
> Peter

-- 
https://mail.python.org/mailman/listinfo/python-list
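The resolution here, a library-level cache rather than a true leak, is easy to reproduce: any unbounded memoisation shows exactly this "object count climbs every iteration" symptom, and clearing the cache makes it vanish. A sketch using the stdlib's own cache (the library in the thread is unnamed; `lru_cache` just stands in for its cache mechanism):

```python
import functools
import gc

@functools.lru_cache(maxsize=None)       # unbounded cache, like the library's
def expensive(n):
    return [0] * n

gc.collect()
before = len(gc.get_objects())

for i in range(5000):
    expensive(i)                         # every distinct argument is retained

grown = len(gc.get_objects()) - before
assert grown > 4000                      # the "leak" is just the cache

expensive.cache_clear()                  # equivalent of shutting the cache off
gc.collect()
assert len(gc.get_objects()) - before < 1000   # objects released again
```

Capping the cache (`maxsize=128` instead of `None`) is often a better fix than disabling it outright.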


Number of objects grows unbounded...Memory leak

2014-05-03 Thread ptb
Hello all,

I'm using Python 3.4 and am seeing the memory usage of my program grow 
unbounded.  Here's a snippet of the loop driving the main computation

opt_dict = {'interior':cons_dict['int_eq'],'lboundary':cons_dict['lboundary'],
'rboundary':cons_dict['rboundary'],
'material_props':{'conv':0.9,'diff':0.01},
'file_ident':ident,'numeric':True,'file_set':files}

# this produces roughly 25,000 elements
args = product(zip(repeat(nx[-1]),ib_frac),nx,subs)

for i,arg in enumerate(args):
    my_func(a=arg[0],b=arg[1],c=arg[2],**opt_dict)
    gc.collect()
    print(i,len(gc.get_objects()))

A few lines of output:

progress
0 84883
1 95842
2 106655
3 117576
4 128444
5 139309
6 150172
7 161015
8 171886
9 182739
10 193593
11 204455
12 215284
13 226102
14 236922
15 247804
16 258567
17 269386
18 280213
19 291032
20 301892
21 312701
22 323536
23 334391
24 345239
25 356076
26 366923
27 377701
28 388532
29 399321
30 410127
31 420917
32 431732
33 442489
34 453320
35 464147
36 475071
37 485593
38 496068
39 506568
40 517040
41 527531
42 538099
43 548658
44 559205
45 569732
46 580214
47 590655
48 601165
49 611656
50 622179
51 632645
52 643186
53 653654
54 664146
...

As you can see the number of objects keep growing and my memory usage grows 
proportionately.  Also, my_func doesn't return any values but simply writes 
data to a file.

I was under the impression that this sort of thing couldn't happen in Python.  
Can someone explain (1) how this is possible? and (2) how do I fix it?

Hopefully that's enough information.

Thanks for your help,
Peter
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python 2.7.3, C++ embed memory leak?

2012-06-02 Thread Qi

On 2012-6-2 18:53, Diez B. Roggisch wrote:

Python does some special things that confuse valgrind. Don't bother.

http://svn.python.org/projects/python/trunk/Misc/README.valgrind


Thanks for the link.
It clears up a lot of my confusion, such as the uninitialized reads...


--
WQ
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python 2.7.3, C++ embed memory leak?

2012-06-02 Thread Diez B. Roggisch
Qi  writes:

> Hi guys,
>
> Is there any known memory leak problems, when embed Python 2.7.3
> in C++?
> I Googled but only found some old posts.
>
> I tried to only call Py_Initialize() and Py_Finalize(), nothing else
> between those functions, Valgrind still reports memory leaks
> on Ubuntu?
>
> Is that a known problem? Did Python 3.x solve it?
>
> I want some confirmation.

Python does some special things that confuse valgrind. Don't bother.

http://svn.python.org/projects/python/trunk/Misc/README.valgrind

Diez
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python 2.7.3, C++ embed memory leak?

2012-05-29 Thread Qi

On 2012-5-29 23:29, Ulrich Eckhardt wrote:


Call the pair of functions twice, if the reported memory leak doesn't
increase, there is no problem. I personally wouldn't even call this a
leak then, but that depends a bit on the precise definition.


I should still call it a memory leak though it seems less harmful.
And it causes trouble that I have difficulty to distinguish if
the leaks are from Python or from my binding code, if I add binding
between that pair of functions.


--
WQ
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python 2.7.3, C++ embed memory leak?

2012-05-29 Thread Ulrich Eckhardt
Am 29.05.2012 16:37, schrieb Qi:
> I tried to only call Py_Initialize() and Py_Finalize(), nothing else
> between those functions, Valgrind still reports memory leaks
> on Ubuntu?

Call the pair of functions twice, if the reported memory leak doesn't
increase, there is no problem. I personally wouldn't even call this a
leak then, but that depends a bit on the precise definition.

Uli

-- 
http://mail.python.org/mailman/listinfo/python-list


Python 2.7.3, C++ embed memory leak?

2012-05-29 Thread Qi

Hi guys,

Is there any known memory leak problems, when embed Python 2.7.3
in C++?
I Googled but only found some old posts.

I tried to only call Py_Initialize() and Py_Finalize(), nothing else
between those functions, Valgrind still reports memory leaks
on Ubuntu?

Is that a known problem? Did Python 3.x solve it?

I want some confirmation.


Thanks


--
WQ
--
http://mail.python.org/mailman/listinfo/python-list


Re: tiny script has memory leak

2012-05-17 Thread Terry Reedy

On 5/17/2012 5:50 AM, Alain Ketterlin wrote:

gry  writes:


sys.version -->  '2.6 (r26:66714, Feb 21 2009, 02:16:04) \n[GCC 4.3.2
[gcc-4_3-branch revision 141291]]



I thought this script would be very lean and fast, but with a large
value for n (like 15), it uses 26G of virtural memory, and things
start to crumble.

#!/usr/bin/env python
'''write a file of random integers.  args are: file-name how-many'''
import sys, random

f = open(sys.argv[1], 'w')
n = int(sys.argv[2])
for i in xrange(n):
 print>>f, random.randint(0, sys.maxint)
f.close()


sys.version is '2.6.6 (r266:84292, Sep 15 2010, 16:22:56) \n[GCC 4.4.5]'
here, and your script works like a charm. BTW, I would use f.write()


That would have to be f.write(str(random.randint(0, sys.maxint))+end) 
where above end would be '\n'.



instead of print >> f (which I think is deprecated).


In the sense that in Py3, print is a function with a file parameter:

print(random.randint(0, sys.maxint), file=f)

The idiosyncratic ugliness of >>file was one reason for the change. 
Adding the option to specify separator and terminator was another.


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list
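Putting Terry's and Alain's suggestions together, a Python 3 version of the script might look as follows. Note `sys.maxint` is gone in Python 3 (`sys.maxsize` is the closest stand-in), `xrange` is simply `range`, and a `with` block replaces the explicit `f.close()`:

```python
#!/usr/bin/env python3
'''write a file of random integers.  args are: file-name how-many'''
import random
import sys

def write_random_ints(path, n):
    with open(path, 'w') as f:           # with-block replaces f.close()
        for _ in range(n):               # range is lazy, like the old xrange
            print(random.randint(0, sys.maxsize), file=f)

if __name__ == '__main__' and len(sys.argv) == 3:
    write_random_ints(sys.argv[1], int(sys.argv[2]))
```

Memory use is flat regardless of `n`, since nothing is accumulated between iterations.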


Re: tiny script has memory leak

2012-05-17 Thread Alain Ketterlin
gry  writes:

> sys.version --> '2.6 (r26:66714, Feb 21 2009, 02:16:04) \n[GCC 4.3.2
> [gcc-4_3-branch revision 141291]]

> I thought this script would be very lean and fast, but with a large
> value for n (like 15), it uses 26G of virtural memory, and things
> start to crumble.
>
> #!/usr/bin/env python
> '''write a file of random integers.  args are: file-name how-many'''
> import sys, random
>
> f = open(sys.argv[1], 'w')
> n = int(sys.argv[2])
> for i in xrange(n):
> print >>f, random.randint(0, sys.maxint)
> f.close()

sys.version is '2.6.6 (r266:84292, Sep 15 2010, 16:22:56) \n[GCC 4.4.5]'
here, and your script works like a charm. BTW, I would use f.write()
instead of print >> f (which I think is deprecated).

-- Alain.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: tiny script has memory leak

2012-05-17 Thread Iain King
On Friday, 11 May 2012 22:29:39 UTC+1, gry  wrote:
> sys.version --> '2.6 (r26:66714, Feb 21 2009, 02:16:04) \n[GCC 4.3.2
> [gcc-4_3-branch revision 141291]]
> I thought this script would be very lean and fast, but with a large
> value for n (like 15), it uses 26G of virtural memory, and things
> start to crumble.
> 
> #!/usr/bin/env python
> '''write a file of random integers.  args are: file-name how-many'''
> import sys, random
> 
> f = open(sys.argv[1], 'w')
> n = int(sys.argv[2])
> for i in xrange(n):
> print >>f, random.randint(0, sys.maxint)
> f.close()
> 
> What's using so much memory?
> What would be a better way to do this?  (aside from checking arg
> values and types, I know...)

Ran OK for me, python 2.4.1 on Windows 7

Iain
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: tiny script has memory leak

2012-05-14 Thread Chris Angelico
On Sat, May 12, 2012 at 7:29 AM, gry  wrote:
> f = open(sys.argv[1], 'w')

What are you passing as the file name argument? Could that device be
the cause of your memory usage spike?

ChrisA
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: tiny script has memory leak

2012-05-14 Thread Ian Kelly
On Fri, May 11, 2012 at 3:29 PM, gry  wrote:
> sys.version --> '2.6 (r26:66714, Feb 21 2009, 02:16:04) \n[GCC 4.3.2
> [gcc-4_3-branch revision 141291]]
> I thought this script would be very lean and fast, but with a large
> value for n (like 15), it uses 26G of virtural memory, and things
> start to crumble.
>
> #!/usr/bin/env python
> '''write a file of random integers.  args are: file-name how-many'''
> import sys, random
>
> f = open(sys.argv[1], 'w')
> n = int(sys.argv[2])
> for i in xrange(n):
>    print >>f, random.randint(0, sys.maxint)
> f.close()
>
> What's using so much memory?

I don't know, I'm not able to replicate the problem you're reporting.
When I try your script with a value of 15, it runs in under a
second and does not appear to consume any more virtual memory than
what is normally used by the Python interpreter.  I suspect there is
something else at play here.

> What would be a better way to do this?  (aside from checking arg
> values and types, I know...)

I don't see anything wrong with the way you're currently doing it,
assuming you can solve your memory leak issue.

Ian
-- 
http://mail.python.org/mailman/listinfo/python-list


tiny script has memory leak

2012-05-14 Thread gry
sys.version --> '2.6 (r26:66714, Feb 21 2009, 02:16:04) \n[GCC 4.3.2
[gcc-4_3-branch revision 141291]]
I thought this script would be very lean and fast, but with a large
value for n (like 15), it uses 26G of virtural memory, and things
start to crumble.

#!/usr/bin/env python
'''write a file of random integers.  args are: file-name how-many'''
import sys, random

f = open(sys.argv[1], 'w')
n = int(sys.argv[2])
for i in xrange(n):
print >>f, random.randint(0, sys.maxint)
f.close()

What's using so much memory?
What would be a better way to do this?  (aside from checking arg
values and types, I know...)
-- 
http://mail.python.org/mailman/listinfo/python-list


Memory leak involving traceback objects

2012-03-08 Thread Ran Harel
I have the same problem with python 2.6.2.
I have upgraded to 2.7.1 and the leak is gone.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Improper creating of logger instances or a Memory Leak?

2011-06-20 Thread Vinay Sajip
On Jun 20, 3:50 pm, foobar  wrote:

> Regarding adding a new logger for each thread - each thread represents
> a telephone call in a data collection system. I need to be able to
> cleanly provided call-logging for debugging to my programmers as well
> as data logging and verification; having a single log file is somewhat
> impractical.  To use the logging filtering then I would have to be
> dynamically adding to the filtering hierarchy continuously, no?
>

You could, for example, have a different *handler* for each thread.
There are a number of possibilities according to exactly what you want
to do, but there's certainly no need to create one *logger* per
thread.

Regards,

Vinay Sajip
-- 
http://mail.python.org/mailman/listinfo/python-list
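One way to realise Vinay's suggestion: a single logger for the whole system, with a per-call file handler that is attached for the duration of the call and filtered to that thread only. The logger name, file-name pattern, and call flow are illustrative, not from the original system:

```python
import logging
import threading

log = logging.getLogger('calls')      # one logger for the whole system
log.setLevel(logging.DEBUG)

class ThreadFilter(logging.Filter):
    """Pass only records emitted by one specific thread."""
    def __init__(self, thread_id):
        super().__init__()
        self.thread_id = thread_id

    def filter(self, record):
        return record.thread == self.thread_id

def handle_call(call_id):
    # per-call (per-thread) log file, attached only while the call runs
    handler = logging.FileHandler('call-%s.log' % call_id)
    handler.addFilter(ThreadFilter(threading.get_ident()))
    log.addHandler(handler)
    try:
        log.info('call %s started', call_id)
        # ... call handling ...
        log.info('call %s finished', call_id)
    finally:
        log.removeHandler(handler)    # no per-call loggers accumulate
        handler.close()
```

Because loggers live forever in the logging module's registry, removing a *handler* when the call ends releases its resources, whereas a logger per call would grow the registry without bound.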


Re: Improper creating of logger instances or a Memory Leak?

2011-06-20 Thread foobar
Yes, I asked it on stack overflow first and didn't see an quick
reply.  I'm trying to tighten up this code as much as possible in a
final pre-production push; I apologize for being overly antsy about
this.  This is my pet project to upgrade our core systems from an
ancient IBM language that Moses might have used.

Currently I'm using python 3.1.2 (sorry for the obvious omission).

Regarding adding a new logger for each thread - each thread represents
a telephone call in a data collection system. I need to be able to
cleanly provide call-logging for debugging to my programmers as well
as data logging and verification; having a single log file is somewhat
impractical.  To use the logging filtering then I would have to be
dynamically adding to the filtering hierarchy continuously, no?

Thanks!
Bill



On Jun 19, 10:42 am, Vinay Sajip  wrote:
> foobar  gmail.com> writes:
>
>
>
> > I've run across a memory leak in a long running process which I can't
> > determine if its my issue or if its the logger.
>
> BTW did you also ask this question on Stack Overflow? I've answered there, 
> too.
>
> http://stackoverflow.com/questions/6388514/
>
> Regards,
>
> Vinay Sajip

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Improper creating of logger instances or a Memory Leak?

2011-06-19 Thread Vinay Sajip
foobar  gmail.com> writes:

> 
> I've run across a memory leak in a long running process which I can't
> determine if its my issue or if its the logger.
> 

BTW did you also ask this question on Stack Overflow? I've answered there, too.

http://stackoverflow.com/questions/6388514/

Regards,

Vinay Sajip

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Improper creating of logger instances or a Memory Leak?

2011-06-19 Thread Vinay Sajip
foobar  gmail.com> writes:

> I've run across a memory leak in a long running process which I can't
> determine if its my issue or if its the logger.

As Chris Torek said, it's not a good idea to create a logger for each thread. A
logger name represents a place in your application; typically, a module, or
perhaps some part of a module. If you want to include information in the log to
see what different threads are doing, do that using the information provided
here:

http://docs.python.org/howto/logging-cookbook.html#adding-contextual-information-to-your-logging-output
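For example, a LoggerAdapter carrying a per-call id does this with one shared logger (a minimal sketch; the "ivr" name and the call_id field are just illustrations, not from the post):

```python
import logging

base = logging.getLogger("ivr")        # one module-level logger for every call
base.setLevel(logging.INFO)

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(levelname)s call=%(call_id)s %(message)s"))
base.addHandler(handler)

def make_call_logger(call_id):
    # LoggerAdapter merges its dict into every record it emits,
    # so the formatter above can reference %(call_id)s.
    return logging.LoggerAdapter(base, {"call_id": call_id})

log = make_call_logger("12345")
log.info("connected")                  # emits: INFO call=12345 connected
```

Adapters are cheap throwaway wrappers, so creating one per call leaves nothing behind in the logging hierarchy.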

Regards,

Vinay Sajip

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Improper creating of logger instances or a Memory Leak?

2011-06-18 Thread Chris Torek
In article 
foobar   wrote:
>I've run across a memory leak in a long running process which I can't
>determine if its my issue or if its the logger.

You do not say what version of python you are using, but on the
other hand I do not know how much the logger code has evolved
over time anyway. :-)

> Each application thread gets a logger instance in its init() method
>via:
>
>self.logger = logging.getLogger('ivr-'+str(self.rand))
>
>where self.rand is a suitably large random number to avoid collisions
>of the log file's name.

This instance will "live forever" (since the thread shares the
main logging manager with all other threads).
-
class Manager:
"""
There is [under normal circumstances] just one Manager instance, which
holds the hierarchy of loggers.
"""
def __init__(self, rootnode):
"""
Initialize the manager with the root node of the logger hierarchy.
"""
[snip]
self.loggerDict = {}

def getLogger(self, name):
"""
Get a logger with the specified name (channel name), creating it
if it doesn't yet exist. This name is a dot-separated hierarchical
name, such as "a", "a.b", "a.b.c" or similar.

If a PlaceHolder existed for the specified name [i.e. the logger
didn't exist but a child of it did], replace it with the created
logger and fix up the parent/child references which pointed to the
placeholder to now point to the logger.
"""
[snip]
self.loggerDict[name] = rv
[snip]
[snip]
Logger.manager = Manager(Logger.root)
-

So you will find all the various ivr-* loggers in
logging.Logger.manager.loggerDict[].

>finally the last statements in the run() method are:
>
>filehandler.close()
>self.logger.removeHandler(filehandler)
>del self.logger #this was added to try and force a clean up of
>the logger instances.

There appears to be no __del__ handler and nothing that allows
removing a logger instance from the manager's loggerDict.  Of
course you could do this "manually", e.g.:

...
self.logger.removeHandler(filehandler)
del logging.Logger.manager.loggerDict[self.logger.name]
del self.logger # optional

I am curious as to why you create a new logger for each thread.
The logging module has thread synchronization in it, so that you
can share one log (or several logs) amongst all threads, which is
more typically what one wants.
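The manual eviction described above can be sanity-checked directly; note this pokes at undocumented internals (Logger.manager.loggerDict), so treat it as a CPython-version-specific sketch:

```python
import logging

lg = logging.getLogger("ivr-12345")          # the manager caches it...
assert "ivr-12345" in logging.Logger.manager.loggerDict

del lg                                       # ...and dropping our reference is not enough
assert "ivr-12345" in logging.Logger.manager.loggerDict

# Only evicting it from the manager's dict actually lets it be collected:
del logging.Logger.manager.loggerDict["ivr-12345"]
assert "ivr-12345" not in logging.Logger.manager.loggerDict
```

This is exactly why one logger per thread (with a unique random name) accumulates: every name ever passed to getLogger() stays in that dict.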
-- 
In-Real-Life: Chris Torek, Wind River Systems
Salt Lake City, UT, USA (40°39.22'N, 111°50.29'W)  +1 801 277 2603
email: gmail (figure it out)  http://web.torek.net/torek/index.html
-- 
http://mail.python.org/mailman/listinfo/python-list


Improper creating of logger instances or a Memory Leak?

2011-06-18 Thread foobar
I've run across a memory leak in a long-running process which I can't
determine if it's my issue or if it's the logger.

The long and short is I'm doing load testing on an application server
which spawns handlers threads which in turn each spawn a single
application thread. A graphic representation would be One Server ->
(to many pairs of) [ Handler <-> Application ].

 Each application thread gets a logger instance in its init() method
via:

self.logger = logging.getLogger('ivr-'+str(self.rand))

where self.rand is a suitably large random number to avoid collisions
of the log file's name.  Until the log file gets created I attach a
memory handler

self.memhandler = logging.handlers.MemoryHandler(1000)
self.memhandler.setLevel(10)
formatter = logging.Formatter('%(levelname)s %(message)s')
self.memhandler.setFormatter(formatter)
self.logger.addHandler(self.memhandler)

when the application thread formally starts with the run() method I
create the log file and terminate the memory handler

filehandler = logging.FileHandler(logfilename)
filehandler.setLevel(10)
formatter = logging.Formatter('%(levelname)s %(message)s')
filehandler.setFormatter(formatter)

self.memhandler.setTarget(filehandler)
self.memhandler.close()
self.logger.removeHandler(self.memhandler)
self.logger.addHandler(filehandler)


finally the last statements in the run() method are:

filehandler.close()
self.logger.removeHandler(filehandler)
del self.logger  # this was added to try and force a clean up of the logger instances

Using the objgraph to look at the objects in memory I find the number
of logger instances equal to the total number of threads to have lived
despite the fact that either a) there are only the standard load
testing number of threads alive, 35, or b) that there are no threads
running nor are there any stale ones waiting for the GC.

From objgraph a selection of the most prevalent objects in memory are
(this is with the system idle post-run):

list                        256730
dict                        128933
Logger                      128164  # total application threads executed running load testing
function                      2356
wrapper_descriptor            1028
builtin_function_or_method     702
method_descriptor              648
tuple                          643
weakref                        629
getset_descriptor             304
type                          252
set                           224
member_descriptor             209
module                        128
WeakSet                       102


The only references to self.logger other than those listed are wrapper
methods defined in the application thread to wrap up the log / debug
methods.  Any help or direction would be much appreciated.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PyCObject & malloc creating memory leak

2010-09-30 Thread Antoine Pitrou
On Thu, 30 Sep 2010 04:06:03 -0700 (PDT)
Tom Conneely  wrote:
> Thanks for your reply, you've given me plenty to think about
> 
> On Sep 29, 11:51 pm, Antoine Pitrou  wrote:
> >
> > > My original plan was to have the data processing and data acquisition
> > > functions running in separate processes, with a multiprocessing.Queue
> > > for passing the raw data packets. The raw data is read in as a char*,
> > > with a non constant length, hence I have allocated memory using
> > > PyMem_Malloc and I am returning from the acquisition function a
> > > PyCObject containing a pointer to this char* buffer, along with a
> > > destructor.
> >
> > That sounds overkill, and I also wonder how you plan to pass that
> > object in a multiprocessing Queue (which relies on objects being
> > pickleable). Why don't you simply create a PyString object instead?
> 
> Could you elaborate on why you feel this is overkill? Also, you're right
> about passing the PyCObjects through a Queue, something which I hadn't
> really considered, so I've switched to using python strings as you
> suggested, an overhead I hoped to avoid but you can't win them all I
> suppose.

Well, there should be no overhead. Actually, a string should be cheaper
since:
- the string contents are allocated inline with the PyObject header
- while your PyCObject contents were allocated separately (two
  allocations rather than one)
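The inline layout Antoine describes is easy to observe from Python itself (a CPython-specific sketch; shown with bytes, Python 3's counterpart of the 2.x str):

```python
import sys

# A bytes object's payload lives inline after the object header,
# so its reported size grows byte-for-byte with its length:
base = sys.getsizeof(b"")
assert sys.getsizeof(b"x" * 100) == base + 100
```

A PyCObject-plus-malloc'd-buffer pair, by contrast, costs two separate allocations plus a pointer indirection.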

Regards

Antoine.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PyCObject & malloc creating memory leak

2010-09-30 Thread Tom Conneely
I'm posting this last message as I've found the source of my initial
memory leak problem, unfortunately it was an embarrassingly basic
mistake. In my defence I've got a horrible cold, but I'm just making
excuses.

I begin by mallocing the memory, which gives me a pointer "foo" to
that memory:
char *foo = PyMem_Malloc(1024 * sizeof(char));

then assign a value to it:
foo = "foo";

of course what this actually does is change the pointer to point to a
new memory address containing a constant "foo". Hence, when I free the
memory in the PyCObject's destructor, the pointer is for the constant
"foo", not the memory I initially allocated.

I only posted this to help people searching, sorry for the noise.

Tom Conneely
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PyCObject & malloc creating memory leak

2010-09-30 Thread Tom Conneely
Thanks for your reply, you've given me plenty to think about

On Sep 29, 11:51 pm, Antoine Pitrou  wrote:
>
> > My original plan was to have the data processing and data acquisition
> > functions running in separate processes, with a multiprocessing.Queue
> > for passing the raw data packets. The raw data is read in as a char*,
> > with a non constant length, hence I have allocated memory using
> > PyMem_Malloc and I am returning from the acquisition function a
> > PyCObject containing a pointer to this char* buffer, along with a
> > destructor.
>
> That sounds overkill, and I also wonder how you plan to pass that
> object in a multiprocessing Queue (which relies on objects being
> pickleable). Why don't you simply create a PyString object instead?

Could you elaborate on why you feel this is overkill? Also, you're right
about passing the PyCObjects through a Queue, something which I hadn't
really considered, so I've switched to using python strings as you
suggested, an overhead I hoped to avoid but you can't win them all I
suppose.

> > So if I call these functions in a loop, e.g. The following will
> > generate ~10GB of data
>
> >     x = MyClass()
> >     for i in xrange(0, 10 * 2**20):
> >         c = x.malloc_buffer()
> >         x.retrieve_buffer(c)
>
> > All my memory disapears, until python crashes with a MemoryError. By
> > placing a print in the destructor function I know it's being called,
> > however it's not actually freeing the memory. So in short, what am I
> > doing wrong?
>
> Python returns memory to the OS by calling free(). Not all OSes
> actually relinquish memory when free() is called; some will simply set
> it aside for the next allocation.
> Another possible (and related) issue is memory fragmentation. Again, it
> depends on the memory allocator.

Yes, I know that's the case, but the "freed" memory should be used for
the next allocation, or at least at some point before python runs out
of memory. Anyway, this is beside the point as I've switched to using
strings.

Again thanks for taking the time to help me out,
Tom Conneely

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PyCObject & malloc creating memory leak

2010-09-29 Thread Antoine Pitrou
On Wed, 29 Sep 2010 06:50:05 -0700 (PDT)
Tom Conneely  wrote:
> 
> My original plan was to have the data processing and data acquisition
> functions running in separate processes, with a multiprocessing.Queue
> for passing the raw data packets. The raw data is read in as a char*,
> with a non constant length, hence I have allocated memory using
> PyMem_Malloc and I am returning from the acquisition function a
> PyCObject containing a pointer to this char* buffer, along with a
> destructor.

That sounds overkill, and I also wonder how you plan to pass that
object in a multiprocessing Queue (which relies on objects being
pickleable). Why don't you simply create a PyString object instead?

> So if I call these functions in a loop, e.g. The following will
> generate ~10GB of data
> 
> x = MyClass()
> for i in xrange(0, 10 * 2**20):
> c = x.malloc_buffer()
> x.retrieve_buffer(c)
> 
> All my memory disapears, until python crashes with a MemoryError. By
> placing a print in the destructor function I know it's being called,
> however it's not actually freeing the memory. So in short, what am I
> doing wrong?

Python returns memory to the OS by calling free(). Not all OSes
actually relinquish memory when free() is called; some will simply set
it aside for the next allocation.
Another possible (and related) issue is memory fragmentation. Again, it
depends on the memory allocator.

Regards

Antoine.


-- 
http://mail.python.org/mailman/listinfo/python-list


PyCObject & malloc creating memory leak

2010-09-29 Thread Tom Conneely
I'm attempting to write a library for reading data via USB from a
device and processing the data to display graphs. I have already
implemented parts of this code as pure python, as a proof of concept
but I have now moved on to implementing the functions in a C
extension.

My original plan was to have the data processing and data acquisition
functions running in separate processes, with a multiprocessing.Queue
for passing the raw data packets. The raw data is read in as a char*,
with a non constant length, hence I have allocated memory using
PyMem_Malloc and I am returning from the acquisition function a
PyCObject containing a pointer to this char* buffer, along with a
destructor. The following code shows a simple test function I've
written (with some module/class boilerplate removed) to demonstrate
this.

static void p_destruct(void *p) {
PyMem_Free((void*)p);
}

static PyObject *malloc_buffer(MyClass *k1) {

PyObject *cobj;
char *foo = PyMem_Malloc(1024 * sizeof(char));

if (foo == NULL) {
return NULL;
}

foo = "foo";
cobj = PyCObject_FromVoidPtr(foo, p_destruct);

return cobj;
}

static PyObject *retrieve_buffer(MyClass *k1, PyObject *args) {
char *foo2;
PyObject *cobj2;

char *kwlist[] = {"foo1", NULL};

if (!PyArg_ParseTuple(args, "O", &cobj2)) {
return NULL;
}

foo2 = PyCObject_AsVoidPtr(cobj2);

//Do something
PySys_WriteStdout(foo2);

Py_RETURN_NONE;
}

So if I call these functions in a loop, e.g. The following will
generate ~10GB of data

x = MyClass()
for i in xrange(0, 10 * 2**20):
c = x.malloc_buffer()
x.retrieve_buffer(c)

All my memory disappears, until python crashes with a MemoryError. By
placing a print in the destructor function I know it's being called,
however it's not actually freeing the memory. So in short, what am I
doing wrong?

This is the first time I've written a non-trivial python C extension,
and I'm still getting my head round the whole Py_INC/DECREF and the
correct way to manage memory, so I spent a while playing around with
incref/decref but I left these out of my above example to keep what
I'm trying to achieve clearer.

Also, I'm aware PyCObject is deprecated in >=2.7 but I'm targeting
Python 2.6 at the moment, and I will move on to using capsules once
I've made the big jump with some other libraries. So if there is
anything that could be hugely different using capsules could you point
this out.

I'm developing using:
Python - 2.6.5
Windows XP (although linux is a future target platform)
msvc compiler

Cheers, any help would be greatly appreciated.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-26 Thread Navkirat Singh

On 27-Aug-2010, at 2:14 AM, Brad wrote:

> On Aug 25, 4:05 am, Alex McDonald  wrote:
>> Your example of writing code with
>> memory leaks *and not caring because it's a waste of your time* makes
>> me think that you've never been a programmer of any sort.
> 
> "Windows applications are immune from memory leaks since programmers
> can count on regular crashes to automatically release previously
> allocated RAM."
> -- 
> http://mail.python.org/mailman/listinfo/python-list


Sorry if I may sound rude, but I have to do this on the windows applications 
comment - hahahahaha
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-26 Thread Brad
On Aug 25, 4:05 am, Alex McDonald  wrote:
> Your example of writing code with
> memory leaks *and not caring because it's a waste of your time* makes
> me think that you've never been a programmer of any sort.

"Windows applications are immune from memory leaks since programmers
can count on regular crashes to automatically release previously
allocated RAM."
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-25 Thread Joshua Maurice
On Aug 25, 4:01 pm, John Passaniti  wrote:
> On Aug 25, 5:01 pm, Joshua Maurice  wrote:
>
> > I agree. Sadly, with managers, especially non-technical
> > managers, it's hard to make this case when the weasel
> > guy says "See! It's working.".
>
> Actually, it's not that hard.  The key to communicating the true cost
> of software development to non-technical managers (and even some
> technical ones!) is to express the cost in terms of a metaphor they
> can understand.  Non-technical managers may not understand the
> technology or details of software development, but they can probably
> understand money.  So finding a metaphor along those lines can help
> them to understand.
>
> http://c2.com/cgi/wiki?WardExplainsDebtMetaphor
>
> I've found that explaining the need to improve design and code quality
> in terms of a debt metaphor usually helps non-technical managers have
> a very real, very concrete understanding of the problem.  For example,
> telling a non-technical manager that a piece of code is poorly written
> and needs to be refactored may not resonate with them.  To them, the
> code "works" and isn't that the only thing that matters?  But put in
> terms of a debt metaphor, it becomes easier for them to see the
> problem.

But then it becomes a game of "How bad is this code exactly?" and "How
much technical debt have we accrued?". At least in my company's
culture, it is quite hard.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-25 Thread John Passaniti
On Aug 25, 5:01 pm, Joshua Maurice  wrote:
> I agree. Sadly, with managers, especially non-technical
> managers, it's hard to make this case when the weasel
> guy says "See! It's working.".

Actually, it's not that hard.  The key to communicating the true cost
of software development to non-technical managers (and even some
technical ones!) is to express the cost in terms of a metaphor they
can understand.  Non-technical managers may not understand the
technology or details of software development, but they can probably
understand money.  So finding a metaphor along those lines can help
them to understand.

http://c2.com/cgi/wiki?WardExplainsDebtMetaphor

I've found that explaining the need to improve design and code quality
in terms of a debt metaphor usually helps non-technical managers have
a very real, very concrete understanding of the problem.  For example,
telling a non-technical manager that a piece of code is poorly written
and needs to be refactored may not resonate with them.  To them, the
code "works" and isn't that the only thing that matters?  But put in
terms of a debt metaphor, it becomes easier for them to see the
problem.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-25 Thread John Bokma
John Passaniti  writes:

> On Aug 24, 8:00 pm, Hugh Aguilar  wrote:
>> The C programmers reading this are likely wondering why I'm being
>> attacked. The reason is that Elizabeth Rather has made it clear to
>> everybody that this is what she wants: [http://tinyurl.com/2bjwp7q]
>
> Hello to those outside of comp.lang.forth, where Hugh usually leaves
> his slime trail.  I seriously doubt many people will bother to read
> the message thread Hugh references, but if you do, you'll get to
> delight in the same nonsense Hugh has brought to comp.lang.forth.
> Here's the compressed version:

I did :-). I have somewhat followed Forth from a far, far distance since
the 80's (including hardware), and did read several messages in the
thread, also since it was not clear what Hugh was referring to.

-- 
John Bokma   j3b

Blog: http://johnbokma.com/Facebook: http://www.facebook.com/j.j.j.bokma
Freelance Perl & Python Development: http://castleamber.com/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-25 Thread Joshua Maurice
On Aug 25, 1:44 pm, John Passaniti  wrote:
> On Aug 24, 9:05 pm, Hugh Aguilar  wrote:
>
> > What about using what I learned to write programs that work?
> > Does that count for anything?
>
> It obviously counts, but it's not the only thing that matters.  Where
> I'm employed, I am currently managing a set of code that "works" but
> the quality of that code is poor.  The previous programmer suffered
> from a bad case of cut-and-paste programming mixed with a
> unsophisticated use of the language.  The result is that this code
> that "works" is a maintenance nightmare, has poor performance, wastes
> memory, and is very brittle.  The high level of coupling between code
> means that when you change virtually anything, it invariably breaks
> something else.
>
> And then you have the issue of the programmer thinking the code
> "works" but it doesn't actually meet the needs of the customer.  The
> same code I'm talking about has a feature where you can pass message
> over the network and have the value you pass configure a parameter.
> It "works" fine, but it's not what the customer wants.  The customer
> wants to be able to bump the value up and down, not set it to an
> absolute value.  So does the code "work"?  Depends on the definition
> of "work."
>
> In my experience, there are a class of software developers who care
> only that their code "works" (or more likely, *appears* to work) and
think that is the gold standard.  It's an attitude that's easy for
> hobbyists to take, but not one that serious professionals can afford
> to have.  A hobbyist can freely spend hours hacking away and having a
> grand time writing code.  Professionals are paid for their efforts,
> and that means that *someone* is spending both time and money on the
> effort.  A professional who cares only about slamming out code that
> "works" is invariably merely moving the cost of maintaining and
> extending the code to someone else.  It becomes a hidden cost, but why
> do they care... it isn't here and now, and probably won't be their
> problem.

I agree. Sadly, with managers, especially non-technical managers, it's
hard to make this case when the weasel guy says "See! It's working.".
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-25 Thread John Passaniti
On Aug 24, 9:05 pm, Hugh Aguilar  wrote:
> What about using what I learned to write programs that work?
> Does that count for anything?

It obviously counts, but it's not the only thing that matters.  Where
I'm employed, I am currently managing a set of code that "works" but
the quality of that code is poor.  The previous programmer suffered
from a bad case of cut-and-paste programming mixed with a
unsophisticated use of the language.  The result is that this code
that "works" is a maintenance nightmare, has poor performance, wastes
memory, and is very brittle.  The high level of coupling between code
means that when you change virtually anything, it invariably breaks
something else.

And then you have the issue of the programmer thinking the code
"works" but it doesn't actually meet the needs of the customer.  The
same code I'm talking about has a feature where you can pass message
over the network and have the value you pass configure a parameter.
It "works" fine, but it's not what the customer wants.  The customer
wants to be able to bump the value up and down, not set it to an
absolute value.  So does the code "work"?  Depends on the definition
of "work."

In my experience, there are a class of software developers who care
only that their code "works" (or more likely, *appears* to work) and
think that is the gold standard.  It's an attitude that's easy for
hobbyists to take, but not one that serious professionals can afford
to have.  A hobbyist can freely spend hours hacking away and having a
grand time writing code.  Professionals are paid for their efforts,
and that means that *someone* is spending both time and money on the
effort.  A professional who cares only about slamming out code that
"works" is invariably merely moving the cost of maintaining and
extending the code to someone else.  It becomes a hidden cost, but why
do they care... it isn't here and now, and probably won't be their
problem.

> If I don't have a professor to pat me on the back, will my
> programs stop working?

What a low bar you set for yourself.  Does efficiency, clarity,
maintainability, extensibility, and elegance not matter to you?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-25 Thread John Passaniti
On Aug 24, 8:00 pm, Hugh Aguilar  wrote:
> The C programmers reading this are likely wondering why I'm being
> attacked. The reason is that Elizabeth Rather has made it clear to
> everybody that this is what she wants: [http://tinyurl.com/2bjwp7q]

Hello to those outside of comp.lang.forth, where Hugh usually leaves
his slime trail.  I seriously doubt many people will bother to read
the message thread Hugh references, but if you do, you'll get to
delight in the same nonsense Hugh has brought to comp.lang.forth.
Here's the compressed version:

1.  Hugh references code ("symtab") that he wrote (in Factor) to
manage symbol tables.
2.  I (and others) did some basic analysis and found it to be a poor
algorithm-- both in terms of memory use and performance-- especially
compared to the usual solutions (hash tables, splay trees, etc.).
3.  I stated that symtab sucked for the intended application.
4.  Hugh didn't like that I called his baby ugly and decided to expose
his bigotry.
5.  Elizabeth Rather said she didn't appreciate Hugh's bigotry in the
newsgroup.

Yep, that's it.  What Hugh is banking on is that you won't read the
message thread, and that you'll blindly accept that Elizabeth is some
terrible ogre with a vendetta against Hugh.  The humor here is that
Hugh himself provides a URL that disproves that!  So yes, if you care,
do read the message thread.  It won't take long for you to get a clear
impression of Hugh's character.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-25 Thread Nick Keighley
On 19 Aug, 16:25, c...@tiac.net (Richard Harter) wrote:
> On Wed, 18 Aug 2010 01:39:09 -0700 (PDT), Nick Keighley
>  wrote:
> >On 17 Aug, 18:34, Standish P  wrote:

> >> How are these heaps being implemented ? Is there some illustrative
> >> code or a book showing how to implement these heaps in C for example ?
>
> >any book of algorithms I'd have thought

my library is currently inaccessible. Normally I'd have picked up
Sedgewick and seen what he had to say on the subject. And possibly
Knuth (though that requires taking more of a deep breath).

Presumably Plauger's library book includes an implementation of
malloc()/free() so that might be a place to start.

> >http://en.wikipedia.org/wiki/Dynamic_memory_allocation
> >http://www.flounder.com/inside_storage_allocation.htm
>
> >I've no idea how good either of these is

serves me right for not checking
:-(

> The wikipedia page is worthless.  

odd really, you'd think basic computer science wasn't that hard...
I found even wikipedia's description of a stack confusing and heavily
biased towards implementation

> The flounder page has
> substantial meat, but the layout and organization is a mess.  A
> quick google search didn't turn up much that was general - most
> articles are about implementations in specific environments.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-25 Thread Anton Ertl
Alex McDonald  writes:
> Your example of writing code with
>memory leaks *and not caring because it's a waste of your time* makes
>me think that you've never been a programmer of any sort. Ever.

Well, I find his approach towards memory leaks as described in
<779b992b-7199-4126-bf3a-7ec40ea80...@j18g2000yqd.googlegroups.com>
quite sensible, use something like that myself, and recommend it to
others.

Followups set to c.l.f (adjust as appropriate).

- anton
-- 
M. Anton Ertl  http://www.complang.tuwien.ac.at/anton/home.html
comp.lang.forth FAQs: http://www.complang.tuwien.ac.at/forth/faq/toc.html
 New standard: http://www.forth200x.org/forth200x.html
   EuroForth 2010: http://www.euroforth.org/ef10/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-25 Thread Alex McDonald
On 25 Aug, 01:00, Hugh Aguilar  wrote:
> On Aug 24, 4:17 pm, Richard Owlett  wrote:
>
> > Hugh Aguilar wrote:
> > > [SNIP ;]
>
> > > The real problem here is that C, Forth and C++ lack automatic garbage
> > > collection. If I have a program in which I have to worry about memory
> > > leaks (as described above), I would be better off to ignore C, Forth
> > > and C++ and just use a language that supports garbage collection. Why
> > > should I waste my time carefully freeing up heap space? I will very
> > > likely not find everything but yet have a few memory leaks anyway.
>
> > IOW Hugh has surpassed GIGO to achieve AGG -
> > *A*utomatic*G*arbage*G*eneration ;)
>
> The C programmers reading this are likely wondering why I'm being
> attacked. The reason is that Elizabeth Rather has made it clear to
> everybody that this is what she wants:
> http://groups.google.com/group/comp.lang.forth/browse_thread/thread/c...
>
> Every Forth programmer who aspires to get a job at Forth Inc. is
> obliged to attack me. Attacking my software that I posted on the FIG
> site is preferred, but personal attacks work too. It is a loyalty
> test.

Complete bollox. A pox on your persecution fantasies.

This isn't about Elizabeth Rather or Forth Inc. It's about your
massive ego and blind ignorance. Your example of writing code with
memory leaks *and not caring because it's a waste of your time* makes
me think that you've never been a programmer of any sort. Ever.

In a commercial environment, your slide rule code would be rejected
during unit testing, and you'd be fired and your code sent to the bit
bucket.

This isn't about CS BS; this is about making sure that bank accounts
square, that planes fly, that nuclear reactors stay sub-critical; that
applications can run 24 by 7, 365 days a year without requiring any
human attention.

So who designs and writes compilers for fail-safe systems? Who designs
and writes operating systems that will run for years, non-stop? Where
do they get the assurance that what they're writing is correct -- and
provably so? From people that do research, hard math, have degrees,
and design algorithms and develop all those other abstract ideas you
seem so keen to reject as high-falutin' nonsense.

I'd rather poke myself in the eye than run any of the crap you've
written.


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-25 Thread David Kastrup
Hugh Aguilar  writes:

> On Aug 24, 5:16 pm, Paul Rubin  wrote:
>> Anyway, as someone else once said, studying a subject like CS isn't done
>> by reading.  It's done by writing out answers to problem after problem.
>> Unless you've been doing that, you haven't been studying.
>
> What about using what I learned to write programs that work? Does that
> count for anything?

No.  Having put together a cupboard that holds some books without
falling apart does not make you a carpenter, much less an architect.

-- 
David Kastrup


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-24 Thread John Bokma
John Bokma  writes:

> At a university which languages you see depend a lot on what your
> teachers use themselves. A language is just a vehicle to get you from a
> to b.

Addendum: or to illustrate a concept (e.g. functional programming, oop)

[..]
> Like you, you mean? You consider yourself quite the expert on how people
> educate and what they learn when educated in a formal
> environment. Without (if I recall correctly) only second hand
   ^^^

   Should've written "With", of course.

> information and guessing.

-- 
John Bokma   j3b

Blog: http://johnbokma.com/    Facebook: http://www.facebook.com/j.j.j.bokma
Freelance Perl & Python Development: http://castleamber.com/


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-24 Thread John Bokma
Hugh Aguilar  writes:

> This is also the attitude that I find among college graduates. They
> just believe what their professors told them in college, and there is
> no why.

Which college is that? It doesn't agree with my experiences. In CS quite
a lot has to be proven with a formal proof, exactly the opposite from
what you claim. And after some time students want to see the proof and
certainly don't accept "there is no why!" unless it's a trivial thing.

Maybe it's because your anecdote is an interpretation from a distance,
not based on actual experience?

> This is essentially the argument being made above --- that C
> is taught in college and Forth is not, therefore C is good and Forth
> is bad --- THERE IS NO WHY!

At a university which languages you see depend a lot on what your
teachers use themselves. A language is just a vehicle to get you from a
to b. What a good study should teach you is how to drive the vehicle
without accidents, not that a red one is the best. Off the top of my
head, I've seen 20+ languages during my study at the University of
Utrecht. Forth wasn't one of them, but I already knew about Forth before
I went to the UU. On top of that I had written an extremely minimalistic
Forth in Z80 assembly years before I went to the UU (based on the work
of someone else).

> People who promote "idiomatic" programming are essentially trying to
> be Yoda. They want to criticize people even when those people's
> programs work.

"Works" doesn't mean that a program is good. There is a lot to
say about a program that works, even one that works flawlessly. I do it
all the time about my own programs. It's good to be critical about your
own work. And if you're a teacher, it's good to provide positive feedback.

> They are just faking up their own expertise ---

Like you, you mean? You consider yourself quite the expert on how people
educate and what they learn when educated in a formal
environment. Without (if I recall correctly) only second hand
information and guessing.

> many of them have never actually written a program that works
> themselves.

Quite some part of CS can be done without writing a single line of code.

> The reason why I like programming is because there is an inherent anti-
> bullshit mechanism in programming. Your program either works or it
> doesn't.

Now can you provide a formal proof that it works, or do you just
consider running the program a few times sufficient proof that "it works"?

> If your program doesn't work, then it doesn't matter if it is
> idiomatic, if you have a college degree, etc., etc.. That is the way I
> see it, anyway.

Well, you see it wrong. A program that doesn't work but is idiomatic is
easier to make work and easier for others to verify. A program
that's the result of trial and error (which is what quite a few
self-taught people end up doing) is a pain in the ass (pardon my
French) to maintain or to extend.

> This perspective doesn't hold for much on
> comp.lang.forth where we have people endlessly spouting blather
> *about* programming,

and you are different how? Also note that your post is crossposted to
several other groups.

> without actually doing any programming themselves. This is why I don't
> take c.l.f. very seriously; people attack me all of the time and I
> don't really care 

heh, hence all the replies you write, and mentioning it in this post.

-- 
John Bokma   j3b

Blog: http://johnbokma.com/    Facebook: http://www.facebook.com/j.j.j.bokma
Freelance Perl & Python Development: http://castleamber.com/


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-24 Thread John Bokma
Hugh Aguilar  writes:

> On Aug 24, 5:16 pm, Paul Rubin  wrote:
>> Anyway, as someone else once said, studying a subject like CS isn't done
>> by reading.  It's done by writing out answers to problem after problem.
>> Unless you've been doing that, you haven't been studying.
>
> What about using what I learned to write programs that work? Does that
> count for anything?

Of course it does; but who's going to verify your program?

> If I don't have a professor to pat me on the back, will my programs
> stop working? That sounds more like magic than technology.

I am sure you know what Paul means. As for patting on the back: you must
make a hell of an effort to get that.

-- 
John Bokma   j3b

Blog: http://johnbokma.com/    Facebook: http://www.facebook.com/j.j.j.bokma
Freelance Perl & Python Development: http://castleamber.com/


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-24 Thread Hugh Aguilar
On Aug 21, 10:57 pm, Steven D'Aprano  wrote:
> Anyway, I'm looking forward to hear why overuse of the return stack is a
> big reason why people use GCC rather than Forth. (Why GCC? What about
> other C compilers?) Me, in my ignorance, I thought it was because C was
> invented and popularised by the same universities which went on to teach
> it to millions of programmers, and is firmly in the poplar and familiar
> Algol family of languages, while Forth barely made any impression on
> those universities, and looks like line-noise and reads like Yoda. (And
> I'm saying that as somebody who *likes* Forth and wishes he had more use
> for it.) In my experience, the average C programmer wouldn't recognise a
> return stack if it poked him in the eye.

"The Empire Strikes Back" was a popular movie. I read an article ("The
puppet like, I do not") criticizing the movie though. At one point,
Luke asked why something was true that Yoda had told him, and Yoda
replied: "There is no why!" The general idea is that the student (Luke)
was supposed to blindly accept what the professor (Yoda) tells him. If
he asks "why?," he gets yelled at.

This is also the attitude that I find among college graduates. They
just believe what their professors told them in college, and there is
no why. This is essentially the argument being made above --- that C
is taught in college and Forth is not, therefore C is good and Forth
is bad --- THERE IS NO WHY!

People who promote "idiomatic" programming are essentially trying to
be Yoda. They want to criticize people even when those people's
programs work. They are just faking up their own expertise --- many of
them have never actually written a program that works themselves.

The reason why I like programming is because there is an inherent anti-
bullshit mechanism in programming. Your program either works or it
doesn't. If your program doesn't work, then it doesn't matter if it is
idiomatic, if you have a college degree, etc., etc.. That is the way I
see it, anyway. This perspective doesn't hold for much on
comp.lang.forth where we have people endlessly spouting blather
*about* programming, without actually doing any programming
themselves. This is why I don't take c.l.f. very seriously; people
attack me all of the time and I don't really care --- I know that my
programs work, which is what matters in the real world.

(Pardon my use of the word "bullshit" above; there is no better term
available.)


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-24 Thread Hugh Aguilar
On Aug 24, 5:16 pm, Paul Rubin  wrote:
> Anyway, as someone else once said, studying a subject like CS isn't done
> by reading.  It's done by writing out answers to problem after problem.
> Unless you've been doing that, you haven't been studying.

What about using what I learned to write programs that work? Does that
count for anything?

If I don't have a professor to pat me on the back, will my programs
stop working? That sounds more like magic than technology.


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-24 Thread John Bokma
Paul Rubin  writes:

> Hugh Aguilar  writes:
>> I've read a lot of graduate-level CS books.
>
> Reading CS books doesn't make you a computer scientist any more than
> listening to violin records makes you a violinist.  Write out answers to
> all the exercises in those books, and get your answers to the more
> difficult ones checked by a professor, and you'll be getting somewhere.
> That's the point someone else was making about self-study: without
> someone checking your answers at first, it's easy to not learn to
> recognize your own mistakes.
>
> Anyway, as someone else once said, studying a subject like CS isn't done
> by reading.  It's done by writing out answers to problem after problem.
> Unless you've been doing that, you haven't been studying.

Yup. I would like to add the following three:

1) being able to teach to peers what you've read.

   As explained in a post I made: during several courses I took you got
   a paper from your teacher and had to teach in front of the class the
   next week. Those papers are quite hard to grasp on the first reading
   even if you know quite a bit of the topic. Understanding it enough
   to teach in front of a class and being able to handle the question
   round, in which the teacher participates, is quite a killer.

2) being able to program on paper / understand programs on paper.

   On several exams I had to write small programs on paper. The
   solutions had to compile (i.e. missing a ; in languages that
   required it was counted against you, as was using an optional ;).  One exam
   was about OOP and several OO languages were taught, and hence on
   paper one had to provide solutions in C++, Objective-C, Object
   Pascal, Smalltalk, Eiffel, etc. No compiler(s) handy.

   And of course questions like: what's wrong with this piece of code
   and how should it be written.

3) being able to write papers and a thesis (or two)

   No explanation needed, quite some people have no problem reading the
   required books, passing the exams, but need quite some time to do
   this (and some give up on it).

-- 
John Bokma   j3b

Blog: http://johnbokma.com/    Facebook: http://www.facebook.com/j.j.j.bokma
Freelance Perl & Python Development: http://castleamber.com/


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-24 Thread Paul Rubin
Hugh Aguilar  writes:
> I've read a lot of graduate-level CS books.

Reading CS books doesn't make you a computer scientist any more than
listening to violin records makes you a violinist.  Write out answers to
all the exercises in those books, and get your answers to the more
difficult ones checked by a professor, and you'll be getting somewhere.
That's the point someone else was making about self-study: without
someone checking your answers at first, it's easy to not learn to
recognize your own mistakes.

Anyway, as someone else once said, studying a subject like CS isn't done
by reading.  It's done by writing out answers to problem after problem.
Unless you've been doing that, you haven't been studying.


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-24 Thread Richard Owlett

Hugh Aguilar wrote:
> On Aug 24, 4:17 pm, Richard Owlett  wrote:
>> Hugh Aguilar wrote:
>>> [SNIP ;]
>>>
>>> The real problem here is that C, Forth and C++ lack automatic garbage
>>> collection. If I have a program in which I have to worry about memory
>>> leaks (as described above), I would be better off to ignore C, Forth
>>> and C++ and just use a language that supports garbage collection. Why
>>> should I waste my time carefully freeing up heap space? I will very
>>> likely not find everything but yet have a few memory leaks anyway.
>>
>> IOW Hugh has surpassed GIGO to achieve AGG -
>> *A*utomatic*G*arbage*G*eneration ;)
>
> The C programmers reading this are likely wondering why I'm being
> attacked. The reason is that Elizabeth Rather has made it clear to
> everybody that this is what she wants:
> http://groups.google.com/group/comp.lang.forth/browse_thread/thread/c37b473ec4da66f1
>
> Every Forth programmer who aspires to get a job at Forth Inc. is
> obliged to attack me. Attacking my software that I posted on the FIG
> site is preferred, but personal attacks work too. It is a loyalty
> test.

*SNICKER SNICKER LOL*
I am not now, nor have I ever been, a professional programmer.
I still recognize you.
P.S. - ever read "The Emperor's New Clothes"




Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-24 Thread Hugh Aguilar
On Aug 24, 4:17 pm, Richard Owlett  wrote:
> Hugh Aguilar wrote:
> > [SNIP ;]
>
> > The real problem here is that C, Forth and C++ lack automatic garbage
> > collection. If I have a program in which I have to worry about memory
> > leaks (as described above), I would be better off to ignore C, Forth
> > and C++ and just use a language that supports garbage collection. Why
> > should I waste my time carefully freeing up heap space? I will very
> > likely not find everything but yet have a few memory leaks anyway.
>
> IOW Hugh has surpassed GIGO to achieve AGG -
> *A*utomatic*G*arbage*G*eneration ;)

The C programmers reading this are likely wondering why I'm being
attacked. The reason is that Elizabeth Rather has made it clear to
everybody that this is what she wants:
http://groups.google.com/group/comp.lang.forth/browse_thread/thread/c37b473ec4da66f1

Every Forth programmer who aspires to get a job at Forth Inc. is
obliged to attack me. Attacking my software that I posted on the FIG
site is preferred, but personal attacks work too. It is a loyalty
test.


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-24 Thread John Bokma
Hugh Aguilar  writes:

> On Aug 22, 11:12 am, John Bokma  wrote:
>
>> And my
>> experience is that a formal study in CS can't compare to home study
>> unless you're really good and have the time and drive to read formal
>> books written on CS. And my experience is that most self-educators don't
>> have that time.
>
> I've read a lot of graduate-level CS books. I think most self-educated
> programmers have read more of these books than have 4-year degree
> students who were not required to in order to get their Bachelor's
> degree and who were too busy during college to read anything that
> wasn't required.

I doubt it. But this all comes back to what I earlier wrote: those with
a CS degree think they are better than people without, and people
without think they can achieve the same or better by just buying a few
books and reading them. On top of that, most of the people I knew in my
final year were very fanatic regarding CS: it was a hobby to
them. During coffeebreaks we talked about approximation algorithms for
TSPs for example. Not always, but it happened. I read plenty of books
during my studies that were not on the list, as did other students I
knew.

If I recall correctly, you don't have a CS degree. I do, and I can tell
you that your /guess/ (since that is all it is) is wrong. For most exams
I've done one had not only to have read the entire book (often in a very
short time), but also the hand-outs. And for quite some courses
additional material was given during the course itself, so not attending
all classes could result in a lower score. Reading additional books and
papers helped. Sometimes reading a book by a different author could be a
real eye opener (and the students I had contact with did exactly this).

On top of that, often in class excercises were done, and with some
courses I had to hand in home work (yikes).

Also, most books are easy to read compared to CS papers. In my final two
years I did several courses which solely consisted of reading a CS paper
and giving a presentation on the subject in front of your classmates
(and sometimes other interested people). Reading and understanding such
a paper is one (and quite an effort). Teaching it in front of a (small)
class within a few days is not easy, to say the least. We also had to
attend several talks by guest speakers. I went to more than the required
number, including a guest talk by Linus. When there was a breakthrough
in proving Fermat's Last Theorem there was a talk, which I attended,
like several other classmates.

I am sure there are students who are there just to get a degree and to
make money. But my classmates didn't fall into that category, or I have
missed something.

So yes, I am convinced that there are plenty of self-educated people who
can code circles around me or plenty of other people with a CS
degree. But IMO those people are very hard to find. Most people
overestimate their skills, with or without a degree; I am sure I do. And
it wouldn't surprise me if self-educated people do this more so.

>> On the other hand: some people I knew during my studies had no problem
>> at all with introducing countless memory leaks in small programs (and
>> turning off compiler warnings, because it gave so much noise...)
>
> I do this all the time. My slide-rule program, for example, has
> beaucoup memory leaks. When I have time to mess with the program I clean
> up these memory leaks, but it is not a big deal. The program just
> runs, generates the gcode and PostScript, and then it is done. I don't
> really worry about memory leaks except with programs that are run
> continuously and have a user-interface, because they can eventually
> run out of memory.

Oh boy, I think you just made my point for me...

> The real problem here is that C, Forth and C++ lack automatic garbage
> collection. If I have a program in which I have to worry about memory
> leaks (as described above), I would be better off to ignore C, Forth
> and C++ and just use a language that supports garbage collection.

Several languages that support garbage collection are still able to leak
memory when circular data structures are used (for example). Also,
allocating memory and never giving it back (by keeping a reference to
it) is effectively a memory leak. And the wrong form of optimization can
result in a program using more memory than necessary. On top of that,
you have to understand when the gc releases memory, and things like
memory fragmentation. In short: you still have to use your head (on some
occasions even more).

> Why should I waste my time carefully freeing up heap space? I will
> very likely not find everything but yet have a few memory leaks
> anyway.

Why should you waste time with carefully checking for other issues? In
my experience, once you become sloppy with one aspect it's very easy to
become sloppy with others as well.

-- 
John Bokma   j3b

Blog: http://johnbokma.com/    Facebook: http:/

Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-24 Thread Hugh Aguilar
On Aug 24, 9:24 am, David Kastrup  wrote:
> Anybody worth his salt in his profession has a trail of broken things in
> his history.

When I was employed as a Forth programmer, I worked for two brothers.
The younger one told me a funny story about when he was 13 or 14 years
old. He bought a radio at a garage sale. The radio worked perfectly,
except that it had no case. He was mighty proud of his radio and was
admiring it, but he noticed that the tubes were dusty. That wouldn't
do! Such a wonderful radio ought to look as good as it sounds! So he
removed the tubes and cleaned them all off with a soft cloth. At this
time it occurred to him that maybe he should have kept track of which
sockets the tubes had come out of. He put the tubes back in so that
they looked correct, but he couldn't be sure.

Fortunately, his older brother who was in high school knew
*everything* about electronics, or at least, that is what he claimed.
So the boy gets his big brother and asks him. The brother says: "There
is one way to know for sure if the tubes are in correctly or not ---
plug the radio in." He plugs in the radio; it makes a crackling noise
and begins to smoke. The boy desperately yanks the cord, but it is too
late; his wonderful radio is toast. The older brother says: "Now you
know!"


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-24 Thread Richard Owlett

Hugh Aguilar wrote:
> [SNIP ;]
>
> The real problem here is that C, Forth and C++ lack automatic garbage
> collection. If I have a program in which I have to worry about memory
> leaks (as described above), I would be better off to ignore C, Forth
> and C++ and just use a language that supports garbage collection. Why
> should I waste my time carefully freeing up heap space? I will very
> likely not find everything but yet have a few memory leaks anyway.

IOW Hugh has surpassed GIGO to achieve AGG -
*A*utomatic*G*arbage*G*eneration ;)





Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-24 Thread Hugh Aguilar
On Aug 22, 11:12 am, John Bokma  wrote:

> And my
> experience is that a formal study in CS can't compare to home study
> unless you're really good and have the time and drive to read formal
> books written on CS. And my experience is that most self-educators don't
> have that time.

I've read a lot of graduate-level CS books. I think most self-educated
programmers have read more of these books than have 4-year degree
students who were not required to in order to get their Bachelor's
degree and who were too busy during college to read anything that
wasn't required.

> On the other hand: some people I knew during my studies had no problem
> at all with introducing countless memory leaks in small programs (and
> turning off compiler warnings, because it gave so much noise...)

I do this all the time. My slide-rule program, for example, has
beaucoup memory leaks. When I have time to mess with the program I clean
up these memory leaks, but it is not a big deal. The program just
runs, generates the gcode and PostScript, and then it is done. I don't
really worry about memory leaks except with programs that are run
continuously and have a user-interface, because they can eventually
run out of memory.

The real problem here is that C, Forth and C++ lack automatic garbage
collection. If I have a program in which I have to worry about memory
leaks (as described above), I would be better off to ignore C, Forth
and C++ and just use a language that supports garbage collection. Why
should I waste my time carefully freeing up heap space? I will very
likely not find everything but yet have a few memory leaks anyway.
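[Editor's note: the one-shot vs. long-running distinction drawn above is the practical crux, and detecting growth in a loop does not require an external tool like valgrind. Python's standard-library `tracemalloc` module can make retained allocations visible. A minimal sketch, not the decorator discussed elsewhere in this thread; `leaky_step` and the sizes are invented for the example:]

```python
import tracemalloc

leaked = []  # stands in for an accidental long-lived reference

def leaky_step():
    """Each call retains another 4 KiB buffer forever."""
    leaked.append(bytearray(4096))

tracemalloc.start()
before, _ = tracemalloc.get_traced_memory()  # (current, peak) in bytes
for _ in range(100):
    leaky_step()
after, _ = tracemalloc.get_traced_memory()
tracemalloc.stop()

# The ~400 KiB added by the loop is still reachable, so it shows up
# as growth in the traced "current" size.
print(after - before >= 100 * 4096)  # True
```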


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-24 Thread John Bokma
David Kastrup  writes:

> John Bokma  writes:
>
>> On the other hand: some people I knew during my studies had no problem
>> at all with introducing countless memory leaks in small programs (and
>> turning off compiler warnings, because it gave so much noise...)
>
> [...]
>
>> As for electrical engineering: done that (BSc) and one of my class
>> mates managed to connect a transformer the wrong way
>> around twice. Yet he had the highest mark in our class.
>
> Anybody worth his salt in his profession has a trail of broken things in
> his history.

Sure. The long version is: he blew up his work when he connected the
transformer wrong. He borrowed someone else's board and blew that one up
as well.

> The faster it thinned out, the better he learned.

He he he, his internships went along similar lines. Maybe he loved to
blow up things.

> The only reliable way never to break a thing is not to touch it in the
> first place.  But that will not help you if it decides to break on its
> own.

I don't think transformers connect themselves in the wrong way ;-). I
agree that accidents do happen, but some people just manage to make
accidents happen way above average. And in that case they might start to
think about whether it's a good idea for them to be touching things.

-- 
John Bokma   j3b

Blog: http://johnbokma.com/    Facebook: http://www.facebook.com/j.j.j.bokma
Freelance Perl & Python Development: http://castleamber.com/


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-24 Thread Hugh Aguilar
On Aug 21, 12:18 pm, ehr...@dk3uz.ampr.org (Edmund H. Ramm) wrote:
> In <2d59bfaa-2aa5-4396-bd03-22200df8c...@x21g2000yqa.googlegroups.com> Hugh 
> Aguilar  writes:
>
> > [...]
> > I really recommend that people spend a lot more time writing code,
> > and a lot less time with all of this pseudo-intellectual nonsense.
> > [...]
>
>    I energetically second that!
> --
>       e-mail: dk3uz AT arrl DOT net  |  AMPRNET: dk...@db0hht.ampr.org
>       If replying to a Usenet article, please use above e-mail address.
>                Linux/m68k, the best U**x ever to hit an Atari!

What open-source code have you posted publicly?

BTW, why did you request that your post not be archived, and be
removed in a few days? That doesn't seem very energetic. Also, now
that I've responded to it, it will be archived forever. It is so rare
that anybody agrees with me, I wanted to make a permanent record. :-)


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-24 Thread Richard Owlett

David Kastrup wrote:
> John Bokma  writes:
>
>> On the other hand: some people I knew during my studies had no problem
>> at all with introducing countless memory leaks in small programs (and
>> turning off compiler warnings, because it gave so much noise...)
>
> [...]
>
>> As for electrical engineering: done that (BSc) and one of my class
>> mates managed to connect a transformer the wrong way
>> around twice. Yet he had the highest mark in our class.
>
> Anybody worth his salt in his profession has a trail of broken things in
> his history.  The faster it thinned out, the better he learned.  The
> only reliable way never to break a thing is not to touch it in the first
> place.  But that will not help you if it decides to break on its own.

*LOL* !!!
I remember the day a very senior field service engineer for a
multi-national minicomputer mfg plugged 16k (or was it 32k) of
core (back when a core was visible to the naked eye ;) the wrong way
into a backplane. After the smoke cleared ... snicker snicker.

I also remember writing a failure report because someone
installed a grounding strap 100 degrees out of orientation on a
piece of multi-kV switchgear. (I don't recall its nominal capacity, but
the backup generator was rated for 1.5 MW continuous ;) P.S. the failure
was demonstrated while the manufacturer's senior sales rep was
demonstrating how easy it was to do maintenance on the system.
There were times I had fun writing up inspection reports.







Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-24 Thread David Kastrup
John Bokma  writes:

> On the other hand: some people I knew during my studies had no problem
> at all with introducing countless memory leaks in small programs (and
> turning off compiler warnings, because it gave so much noise...)

[...]

> As for electrical engineering: done that (BSc) and one of my class
> mates managed to connect a transformer the wrong way
> around twice. Yet he had the highest mark in our class.

Anybody worth his salt in his profession has a trail of broken things in
his history.  The faster it thinned out, the better he learned.  The
only reliable way never to break a thing is not to touch it in the first
place.  But that will not help you if it decides to break on its own.

-- 
David Kastrup


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-24 Thread Alex McDonald
On 24 Aug, 01:00, Hugh Aguilar  wrote:
> On Aug 21, 12:32 pm, Alex McDonald  wrote:
>
> > "Scintilla" gets about 2,080,000 results on google; "blather" gets
> > about 876,000 results. O Hugh, you pseudo-intellectual you!
>
> > > with gutter language such as
> > > "turd"
>
> > About 5,910,000 results. It has a long history, even getting a mention
> > in the Wyclif's 13th century bible.
>
> You looked up "blather" and "turd" on google *AND* you are not a
> pseudo-intellectual??? That is funny!
>
> I don't consider myself to be a pseudo-intellectual. I don't have any
> education however, so a pseudo-intellectual is the only kind of
> intellectual that I could be.

I don't have any formal CS education, nor a degree in anything else.
But that doesn't make me an anti-intellectual by instinct (the
instinct would be jealousy, I guess), nor does it stop me from
learning. Or using Google, something I'm sure you do too.

We have a great degree of admiration and fondness for intellectuals in
Europe; the French in particular hold them in very high regard.
Perhaps disdain of learning and further education is peculiar to a
certain section of American society, as the label
"intellectual" (often, "liberal intellectual") appears to be used as a
derogatory term. I have no idea what a pseudo-intellectual might be,
but it's evident you mean it in much the same way.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-23 Thread Hugh Aguilar
On Aug 22, 3:40 pm, 1001nuits <1001nu...@gmail.com> wrote:
> Another thing you learn in studying in University is the fact that you can  
> be wrong, which is quite difficult to accept for self taught people.

Yet another thing you learn in studying in University, is the art of
apple polishing! LOL

If a person has graduated from college, it is not clear what if
anything he has learned of a technical nature --- but it can be
assumed that he has learned to be a head-bobber (someone who
habitually bobs his head up and down in agreement when the boss is
speaking) and has learned to readily admit to being wrong when
pressured (when the boss looks at him without smiling for more than
two seconds). These are the traits that bosses want in an employee ---
that prove the employee to be "trainable."

BTW, has anybody actually looked at my software?
http://www.forth.org/novice.html

All this pseudo-intellectual nonsense (including this post) is getting
boring. Why don't we try discussing software for a while? I wrote that
slide-rule program as a showcase of Forth. I've been thinking of
porting it over to another language, possibly C. Maybe one of you C
experts could write the C program though, as a comparison --- to show
how much better C is than Forth. You can demonstrate that my code was
badly written and strangely designed --- with a concrete example,
rather than just a lot hand-waving and chest-thumping.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-23 Thread Hugh Aguilar
On Aug 21, 12:32 pm, Alex McDonald  wrote:
> "Scintilla" gets about 2,080,000 results on google; "blather" gets
> about 876,000 results. O Hugh, you pseudo-intellectual you!
>
> > with gutter language such as
> > "turd"
>
> About 5,910,000 results. It has a long history, even getting a mention
> in the Wyclif's 13th century bible.

You looked up "blather" and "turd" on google *AND* you are not a
pseudo-intellectual??? That is funny!

I don't consider myself to be a pseudo-intellectual. I don't have any
education however, so a pseudo-intellectual is the only kind of
intellectual that I could be.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-22 Thread 1001nuits


Le Sun, 22 Aug 2010 20:12:36 +0200, John Bokma  a écrit:

> David Kastrup  writes:
>
>> John Bokma  writes:
>>
>>> David Kastrup  writes:
>>>
>>>> John Passaniti  writes:
>>>>
>>>>> Amen!  All this academic talk is useless.  Who cares about things like
>>>>> the big-O notation for program complexity.  Can't people just *look*
>>>>> at code and see how complex it is?!  And take things like the years of
>>>>> wasted effort computer scientists have put into taking data structures
>>>>> (like hashes and various kinds of trees) and extending them along
>>>>> various problem domains and requirements.  Real programmers don't
>>>>> waste their time with learning that junk.  What good did any of that
>>>>> ever do anyone?!
>>>>
>>>> It is my experience that in particular graduated (and in particular Phd)
>>>> computer scientists don't waste their time _applying_ that junk.
>>>
>>> Question: do you have a degree in computer science?
>>>
>>> Since in my experience: people who talk about their experience with
>>> graduated people often missed the boat themselves and think that reading
>>> a book or two equals years of study.
>>
>> I have a degree in electrical engineering.  But that's similarly
>> irrelevant.
>
> Nah, it's not: your attitude towards people with a degree in computer
> science agrees with what I wrote.
>
>> That has not particularly helped my respect towards CS majors and PhDs
>> in the function of programmers (and to be honest: their education is not
>> intended to make them good programmers, but to enable them to _lead_
>> good programmers).
>
> I disagree.
>
>> That does not mean that I am incapable of analyzing, say quicksort and
>> mergesort,
>
> Oh, that's what I was not implying. I am convinced that quite some
> people who do self-study can end up with better understanding of things
> than people who do it for a degree. I have done both: I already was
> programming in several languages before I was studying CS. And my
> experience is that a formal study in CS can't compare to home study
> unless you're really good and have the time and drive to read formal
> books written on CS. And my experience is that most self-educaters don't
> have that time.
>
> On the other hand: some people I knew during my studies had no problem
> at all with introducing countless memory leaks in small programs (and
> turning off compiler warnings, because it gave so much noise...)
>
>> Donald Knuth never studied computer science.
>
> Yes, yes, and Albert Einstein worked at an office.
>
> Those people are very rare.
>
> But my experience (see for plenty of examples: Slashdot) is that quite
> some people who don't have a degree think that all that formal education
> is just some paper pushing and doesn't count. While some of those who do
> have the paper think they know it all. Those people who are right in
> either group are a minority in my experience.
>
> As for electrical engineering: done that (BSc) and one of my class mates
> managed to connect a transformer the wrong way around twice. Yet he
> had the highest mark in our class.
>
> So in short: yes, self-study can make you good at something. But
> self-study IMO is not in general a replacement for a degree. Someone who
> can become great after self-study would excel at a formal study and
> learn more. Study works best if there is competition and if there are
> challenges. I still study a lot at home, but I do miss the challenges
> and competition.

Hi all,

I quite agree with the fact that self learning is not enough.

Another thing you learn in studying in University is the fact that you can  
be wrong, which is quite difficult to accept for self taught people. When  
you work in groups, you are bound to admit that you don't have the best  
solution all the time. To my experience, self-taught people I worked with  
had tremendous difficulties to accept that they were wrong, that their  
design was badly done, that their code was badly written or strangely  
designed.


Because self teaching was done with a lot of efforts, in particular to  
figure out complex problems on their own. Most of the time, the self  
learned people are attached to the things they learned by themselves and  
have difficulties to envisage that being right of wrong is often not an  
issue provided the group comes to the best option. They often live  
contradiction as a personal offense while it is just work, you know.


That's another interest of the degree, confrontation with other people  
that have the same background. And letting the things learned at the place  
they should be and not in the affective area.


1001




--
Using Opera's revolutionary e-mail client:
http://www.opera.com/mail/

--
http://mail.python.org/mailman/listinfo/python-list


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-22 Thread John Bokma
David Kastrup  writes:

> John Bokma  writes:
>
>> David Kastrup  writes:
>>
>>> John Passaniti  writes:
>>>
>>>> Amen!  All this academic talk is useless.  Who cares about things like
>>>> the big-O notation for program complexity.  Can't people just *look*
>>>> at code and see how complex it is?!  And take things like the years of
>>>> wasted effort computer scientists have put into taking data structures
>>>> (like hashes and various kinds of trees) and extending them along
>>>> various problem domains and requirements.  Real programmers don't
>>>> waste their time with learning that junk.  What good did any of that
>>>> ever do anyone?!
>>>
>>> It is my experience that in particular graduated (and in particular Phd)
>>> computer scientists don't waste their time _applying_ that junk.
>>
>> Question: do you have a degree in computer science?
>>
>> Since in my experience: people who talk about their experience with
>> graduated people often missed the boat themselves and think that reading
>> a book or two equals years of study.
>
> I have a degree in electrical engineering.  But that's similarly
> irrelevant.

Nah, it's not: your attitude towards people with a degree in computer
science agrees with what I wrote.

> That has not particularly helped my respect towards CS majors and PhDs
> in the function of programmers (and to be honest: their education is not
> intended to make them good programmers, but to enable them to _lead_
> good programmers).

I disagree. 

> That does not mean that I am incapable of analyzing, say quicksort and
> mergesort,

Oh, that's what I was not implying. I am convinced that quite some
people who do self-study can end up with better understanding of things
than people who do it for a degree. I have done both: I already was
programming in several languages before I was studying CS. And my
experience is that a formal study in CS can't compare to home study
unless you're really good and have the time and drive to read formal
books written on CS. And my experience is that most self-educaters don't
have that time.

On the other hand: some people I knew during my studies had no problem
at all with introducing countless memory leaks in small programs (and
turning off compiler warnings, because it gave so much noise...)

> Donald Knuth never studied computer science.

Yes, yes, and Albert Einstein worked at an office.

Those people are very rare. 

But my experience (see for plenty of examples: Slashdot) is that quite
some people who don't have a degree think that all that formal education
is just some paper pushing and doesn't count. While some of those who do
have the paper think they know it all. Those people who are right in
either group are a minority in my experience.

As for electrical engineering: done that (BSc) and one of my class mates
managed to connect a transformer the wrong way around twice. Yet he
had the highest mark in our class.

So in short: yes, self-study can make you good at something. But
self-study IMO is not in general a replacement for a degree. Someone who
can become great after self-study would excel at a formal study and
learn more. Study works best if there is competition and if there are
challenges. I still study a lot at home, but I do miss the challenges
and competition.

-- 
John Bokma   j3b

Blog: http://johnbokma.com/Facebook: http://www.facebook.com/j.j.j.bokma
Freelance Perl & Python Development: http://castleamber.com/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-22 Thread David Kastrup
John Bokma  writes:

> David Kastrup  writes:
>
>> John Passaniti  writes:
>>
>>> Amen!  All this academic talk is useless.  Who cares about things like
>>> the big-O notation for program complexity.  Can't people just *look*
>>> at code and see how complex it is?!  And take things like the years of
>>> wasted effort computer scientists have put into taking data structures
>>> (like hashes and various kinds of trees) and extending them along
>>> various problem domains and requirements.  Real programmers don't
>>> waste their time with learning that junk.  What good did any of that
>>> ever do anyone?!
>>
>> It is my experience that in particular graduated (and in particular Phd)
>> computer scientists don't waste their time _applying_ that junk.
>
> Question: do you have a degree in computer science?
>
> Since in my experience: people who talk about their experience with
> graduated people often missed the boat themselves and think that reading
> a book or two equals years of study.

I have a degree in electrical engineering.  But that's similarly
irrelevant.  I have a rather thorough background with computers (started
with punched cards), get along with about a dozen assembly languages and
quite a few other higher level languages.  I've had to write the BIOS
for my first computer and a number of other stuff and did digital
picture enhancement on DOS computers with EMM (programming 80387
assembly language and using a variant of Hartley transforms).

I have rewritten digital map processing code from scratch that has been
designed and optimized by graduated computer scientists (including one
PhD) to a degree where it ran twice as fast as originally, at the cost
of occasional crashes and utter unmaintainability.  Twice as fast
meaning somewhat less than a day of calculation time for medium size
data sets (a few 10 of data points, on something like a 25MHz 68020
or something).  So I knew the problem was not likely to be easy.  Took
me more than a week.  After getting the thing to compile and fixing the
first few crashing conditions, I got stuck in debugging.  The thing just
terminated after about 2 minutes of runtime without an apparent reason.
I spent almost two more days trying to find the problem before bothering
to even check the output.  The program just finished regularly.

That has not particularly helped my respect towards CS majors and PhDs
in the function of programmers (and to be honest: their education is not
intended to make them good programmers, but to enable them to _lead_
good programmers).

That does not mean that I am incapable of analyzing, say quicksort and
mergesort, and come up with something reasonably close to a closed form
for average, min, and max comparisons (well, unless a close
approximation is good enough, you have to sum about lg n terms which is
near instantaneous, with a real closed form mostly available when n is
special, like a power of 2).  And I know how to work with more modern
computer plagues, like the need for cache coherency.
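
[Editor's aside: the quicksort/mergesort comparison-count analysis mentioned above is easy to check empirically. A minimal Python sketch (illustrative only, not from the thread): for n a power of two, mergesort's worst case is exactly n*lg(n) - n + 1 comparisons, so any input must come in at or below that bound.]

```python
def merge_count(left, right):
    """Merge two sorted lists; return (merged list, comparisons used)."""
    merged, comps, i, j = [], 0, 0, 0
    while i < len(left) and j < len(right):
        comps += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])   # leftovers need no further comparisons
    merged.extend(right[j:])
    return merged, comps

def mergesort_count(xs):
    """Mergesort that also returns how many comparisons it performed."""
    if len(xs) <= 1:
        return list(xs), 0
    mid = len(xs) // 2
    left, cl = mergesort_count(xs[:mid])
    right, cr = mergesort_count(xs[mid:])
    merged, cm = merge_count(left, right)
    return merged, cl + cr + cm

# Check the power-of-two closed form M(n) = n*lg(n) - n + 1 as an
# upper bound (the recurrence is M(n) = 2*M(n/2) + n - 1, M(1) = 0).
for k in range(1, 8):
    n = 2 ** k
    out, comps = mergesort_count(list(range(n))[::-1])
    assert out == list(range(n))
    assert comps <= n * k - n + 1
```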

So in short, I have a somewhat related scientific education, but I can
work the required math.  And I can work the computers.

> Oh, and rest assured, it works both ways: people who did graduate are
> now and then thinking it's the holy grail and no body can beat it with
> home study.
>
> Both are wrong, by the way.

Depends.  In my personal opinion, living close to the iron and being
sharp enough can make a lot of a difference.

Donald Knuth never studied computer science.  He more or less founded
it.  As a programmer, he is too much artist and too little engineer for
my taste: you can't take his proverbial masterpiece "TeX" apart without
the pieces crumbling.  He won't write inefficient programs: he has the
respective gene and the knowledge to apply it.  But the stuff he wrote
is not well maintainable and reusable.  Of course, he has no need for
reuse if he can rewrite as fast as applying an interface.

-- 
David Kastrup
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-21 Thread Steven D'Aprano
Oh, I am so going to regret getting sucked into this tarpit... oh 
well.


On Sat, 21 Aug 2010 09:58:18 -0700, Hugh Aguilar wrote:

> The
> following is a pretty good example, in which Alex mixes big pseudo-
> intellectual words such as "scintilla" with gutter language such as
> "turd" in an ungrammatical mish-mash

You say that like it's a bad thing.

Besides, scintilla isn't a "big pseudo-intellectual" word. It might seem 
so to those whose vocabulary (that's another big word, like "patronizing" 
and "fatuousness") is lacking, but it's really quite a simple word. It 
means "a spark", hence "scintillating", as in "he thinks he's quite the 
scintillating wit, and he's half right". It also means "an iota, a 
smidgen, a scarcely detectable amount", and if anyone can't see the 
connection between a spark and a smidgen, there's probably no hope for 
them.

Nothing intellectual about it, let alone pseudo-intellectual, except that 
it comes from Latin. But then so do well more half the words in the 
English language.

Anyway, I'm looking forward to hear why overuse of the return stack is a 
big reason why people use GCC rather than Forth. (Why GCC? What about 
other C compilers?) Me, in my ignorance, I thought it was because C was 
invented and popularised by the same universities which went on to teach 
it to millions of programmers, and is firmly in the popular and familiar 
Algol family of languages, while Forth barely made any impression on 
those universities, and looks like line-noise and reads like Yoda. (And 
I'm saying that as somebody who *likes* Forth and wishes he had more use 
for it.) In my experience, the average C programmer wouldn't recognise a 
return stack if it poked him in the eye.



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-21 Thread Brad
On Aug 21, 3:36 am, David Kastrup  wrote:
>
> I think there must be some programmer gene.  It is not enough to be able
> to recognize O(n^k) or worse (though it helps having a more exact rather
> than a fuzzy notion of them _if_ you have that gene).  

Some of the best minds in comp.lang.forth have a penchant for sarcasm
- one of the reasons I always read their posts. Maybe it gets lost on
the international crowd, but I love it.

-Brad
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-21 Thread John Bokma
David Kastrup  writes:

> John Passaniti  writes:
>
>> Amen!  All this academic talk is useless.  Who cares about things like
>> the big-O notation for program complexity.  Can't people just *look*
>> at code and see how complex it is?!  And take things like the years of
>> wasted effort computer scientists have put into taking data structures
>> (like hashes and various kinds of trees) and extending them along
>> various problem domains and requirements.  Real programmers don't
>> waste their time with learning that junk.  What good did any of that
>> ever do anyone?!
>
> It is my experience that in particular graduated (and in particular Phd)
> computer scientists don't waste their time _applying_ that junk.

Question: do you have a degree in computer science?

Since in my experience: people who talk about their experience with
graduated people often missed the boat themselves and think that reading
a book or two equals years of study.

Oh, and rest assured, it works both ways: people who did graduate are
now and then thinking it's the holy grail and no body can beat it with
home study.

Both are wrong, by the way.

-- 
John Bokma   j3b

Blog: http://johnbokma.com/Facebook: http://www.facebook.com/j.j.j.bokma
Freelance Perl & Python Development: http://castleamber.com/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-21 Thread Alex McDonald
On 21 Aug, 17:58, Hugh Aguilar  wrote:
> On Aug 21, 5:29 am, Alex McDonald  wrote:
>
> > On 21 Aug, 06:42, Standish P  wrote:
> > > Admittedly, I am asking a question that would be thought
> > > provoking to those who claim to be "experts" but these experts are
> > > actually very stingy and mean business people, most certainly worse
> > > than Bill Gates, only it did not occur to them his ideas and at the
> > > right time.
>
> What surprises me is that anyone bothered to answer, as your question
> > was neither "thought provoking" nor in need of attention from an
> > expert. Their generosity in the face of so much stupidity stands out
> > as remarkable.
>
> I wouldn't call the OP "stupid," which is just mean-spirited.

Perhaps I'm just getting less forgiving the older I get, or the more I
read here. The internet is a fine resource for research, and tools
like google, archivx and so on are easy to access and take but a
little effort to use.

> That is
> not much of a welcome wagon for somebody who might learn Forth
> eventually and join our rather diminished ranks.

I care neither to be included in your "diminished ranks", nor do I
take much regard of popularity as you define it. Standish P doesn't
want to join anything; he (like you) has an agenda for yet another
club with a membership of one.

> Lets go with "over-
> educated" instead! I thought that his question was vague. It seemed
> like the kind of question that students pose to their professor in
> class to impress him with their thoughtfulness, so that he'll forget
> that they never did get any of their homework-assignment programs to
> actually work.

It didn't work. He hasn't done any homework, neither do you, and it
shows.

> I yet maintain that writing programs is what
> programming is all about.

You remind me of those that would build a house without an architect,
or fly without bothering to study the weather.

>
> I see a lot of pseudo-intellectual blather on comp.lang.forth. The
> following is a pretty good example, in which Alex mixes big pseudo-
> intellectual words such as "scintilla"

"Scintilla" gets about 2,080,000 results on google; "blather" gets
about 876,000 results. O Hugh, you pseudo-intellectual you!

> with gutter language such as
> "turd"

About 5,910,000 results. It has a long history, even getting a mention
in the Wyclif's 13th century bible.

> in an ungrammatical mish-mash --- and defends the overuse of
> the return stack for holding temporary data as being readable(?!):


I did? Where? You're making stuff up. Again.


> http://groups.google.com/group/comp.lang.forth/browse_thread/thread/4...
>
> On Jul 23, 4:43 pm, Alex McDonald  wrote:
>
> > Whereas yours contained several tens, and nearly every one of them is
> > wrong. Hugh, do you actually have any evidence -- even a scintilla --
> > that supports this long winded opinions-as-fact post? Take any of the
> > statements you make, and demonstrate that you can justify it.
> > Reminding us that you said it before doesn't count.
>
> > Start with this turd of an assertion and see if you can polish it;
> > "Most of the time, when Forth code gets really ugly, it is because of
> > an overuse of >R...R> --- that is a big reason why people use GCC
> > rather than Forth."
>

Something you never did address, probably because the statement you
made is just another symptom of Aguilar's Disease; presenting as fact
an opinion based on personal experience, limited observation and no
research.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-21 Thread Hugh Aguilar
On Aug 21, 5:29 am, Alex McDonald  wrote:
> On 21 Aug, 06:42, Standish P  wrote:
> > Admittedly, I am asking a question that would be thought
> > provoking to those who claim to be "experts" but these experts are
> > actually very stingy and mean business people, most certainly worse
> > than Bill Gates, only it did not occur to them his ideas and at the
> > right time.
>
> What surprises me is that anyone bothered to answer, as your question
> was neither "thought provoking" nor in need of attention from an
> expert. Their generosity in the face of so much stupidity stands out
> as remarkable.

I wouldn't call the OP "stupid," which is just mean-spirited. That is
not much of a welcome wagon for somebody who might learn Forth
eventually and join our rather diminished ranks. Lets go with "over-
educated" instead! I thought that his question was vague. It seemed
like the kind of question that students pose to their professor in
class to impress him with their thoughtfulness, so that he'll forget
that they never did get any of their homework-assignment programs to
actually work. I yet maintain that writing programs is what
programming is all about.

I see a lot of pseudo-intellectual blather on comp.lang.forth. The
following is a pretty good example, in which Alex mixes big pseudo-
intellectual words such as "scintilla" with gutter language such as
"turd" in an ungrammatical mish-mash --- and defends the overuse of
the return stack for holding temporary data as being readable(?!):
http://groups.google.com/group/comp.lang.forth/browse_thread/thread/4b9f67406c6852dd/0218831f02564410

On Jul 23, 4:43 pm, Alex McDonald  wrote:
> Whereas yours contained several tens, and nearly every one of them is
> wrong. Hugh, do you actually have any evidence -- even a scintilla --
> that supports this long winded opinions-as-fact post? Take any of the
> statements you make, and demonstrate that you can justify it.
> Reminding us that you said it before doesn't count.
>
> Start with this turd of an assertion and see if you can polish it;
> "Most of the time, when Forth code gets really ugly, it is because of
> an overuse of >R...R> --- that is a big reason why people use GCC
> rather than Forth."
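
[Editor's aside, for readers coming from Python: in Forth, >R moves the top of the data stack to the return stack and R> moves it back, which is what the complaint about "overuse of >R...R>" refers to. A toy two-stack model in Python, purely illustrative and not any real Forth:]

```python
class MiniForth:
    """Toy two-stack machine: a data stack plus a return stack."""

    def __init__(self):
        self.data = []    # operand (data) stack; top of stack is the last element
        self.rstack = []  # return stack, often abused as scratch storage

    def push(self, n):
        self.data.append(n)

    def to_r(self):
        """>R : move the top of the data stack to the return stack."""
        self.rstack.append(self.data.pop())

    def r_from(self):
        """R> : move the top of the return stack back to the data stack."""
        self.data.append(self.rstack.pop())

    def swap(self):
        """SWAP ( a b -- b a ), using the return stack as scratch space."""
        self.to_r()              # park b on the return stack
        a = self.data.pop()      # lift a off the data stack
        self.r_from()            # bring b back
        self.data.append(a)      # a ends up on top

f = MiniForth()
f.push(1); f.push(2)
f.swap()          # data stack is now [2, 1] (1 on top)
```

The point of the quoted complaint is that once several values are parked on the return stack at once, the reader has to simulate both stacks in their head to follow the code.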
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-21 Thread Alex McDonald
On 21 Aug, 06:42, Standish P  wrote:
> On Aug 20, 3:51 pm, Hugh Aguilar  wrote:
>
>
>
> > On Aug 18, 6:23 pm, Standish P  wrote:
>
> > > On Aug 17, 6:38 pm, John Passaniti  wrote:
>
> > > > You asked if Forth "borrowed" lists from Lisp.  It did not.  In Lisp,
> > > > lists are constructed with pair of pointers called a "cons cell".
> > > > That is the most primitive component that makes up a list.  Forth has
> > > > no such thing; in Forth, the dictionary (which is traditionally, but
> > > > not necessarily a list) is a data structure that links to the previous
> > > > word with a pointer.  
>
> > > Would you show me a picture, ascii art or whatever for Forth ? I know
> > > what lisp lists look like so I dont need that for comparison. Forth
> > > must have a convention and a standard or preferred practice for its
> > > dicts. However, let me tell you that in postscript the dictionaries
> > > can be nested inside other dictionaries and any such hiearchical
> > > structure is a nested associative list, which is what linked list,
> > > nested dictionaries, nested tables are.
>
> > You can see an example of lists in my novice package (in the list.4th
> > file):http://www.forth.org/novice.html
> > Also in there is symtab, which is a data structure intended to be used
> > for symbol tables (dictionaries). Almost nobody uses linked lists for
> > the dictionary anymore (the FIG compilers of the 1970s did, but they
> > are obsolete).
>
> > I must say, I've read through this entire thread and I didn't
> > understand *anything* that *anybody* was saying (especially the OP).
>
> You didn't understand anything because no one explained anything
> coherently.

It indicates that you're asking a question that *you don't
understand*.

I'm continually amazed that people come to Usenet, wikis, websites and
other fora and ask questions that even the most basic of research (and
a bit of care with terminology aka "using the right words") would show
to be confused. A quick scan of the available literature on garbage
collection and stacks, starting with the fundamentals, would surely
show you what you need to know.

> Admittedly, I am asking a question that would be thought
> provoking to those who claim to be "experts" but these experts are
> actually very stingy and mean business people, most certainly worse
> than Bill Gates, only it did not occur to them his ideas and at the
> right time.
>

What surprises me is that anyone bothered to answer, as your question
was neither "thought provoking" nor in need of attention from an
expert. Their generosity in the face of so much stupidity stands out
as remarkable.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-21 Thread David Kastrup
John Passaniti  writes:

> Amen!  All this academic talk is useless.  Who cares about things like
> the big-O notation for program complexity.  Can't people just *look*
> at code and see how complex it is?!  And take things like the years of
> wasted effort computer scientists have put into taking data structures
> (like hashes and various kinds of trees) and extending them along
> various problem domains and requirements.  Real programmers don't
> waste their time with learning that junk.  What good did any of that
> ever do anyone?!

It is my experience that in particular graduated (and in particular Phd)
computer scientists don't waste their time _applying_ that junk.  They
have learnt to analyze it, they could tell you how bad their own
algorithms are (if they actually bothered applying their knowledge), but
it does not occur to them to replace them by better ones.  Or even
factor their solutions in a way that the algorithms and data structures
are actually isolated.

I think there must be some programmer gene.  It is not enough to be able
to recognize O(n^k) or worse (though it helps having a more exact rather
than a fuzzy notion of them _if_ you have that gene).  You have to fear
it.  It has to hurt.  You need to feel compassion with the CPU.  It's
not enough to sit there in your easychair, occasionally sucking on your
pipeline and listen to its story about a hard realtime youth and its
strained connection to its motherboard.  When it stops, you have to see
its benchmarks and feel their pain in your own backplane.

-- 
David Kastrup
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-21 Thread Elizabeth D Rather

On 8/20/10 7:42 PM, Standish P wrote:
...

Admittedly, I am asking a question that would be thought
provoking to those who claim to be "experts" but these experts are
actually very stingy and mean business people, most certainly worse
than Bill Gates, only it did not occur to them his ideas and at the
right time.


The problem as I see it is that you're asking complex questions in a 
forum that, at best, supports simple answers.  The information you're 
looking for exists, on the net, free.  There are free pdfs of manuals on 
Forth available with program downloads from FORTH, Inc., MPE, Gforth, 
and other sources, as well as some inexpensive books.  But you have to 
be willing to make the investment to download and read them, because the 
answers to your questions are not simple one-liners that you can get 
from newsgroups, and the folks in newsgroups are not prepared to host 
computer science seminars -- many of us are working programmers, 
engineers, and project managers who have limited time to spend here.  If 
you're willing to invest your time enough to investigate some of these 
sources, and still have questions, we'll be happy to try to help.


Cheers,
Elizabeth

--
==
Elizabeth D. Rather   (US & Canada)   800-55-FORTH
FORTH Inc. +1 310.999.6784
5959 West Century Blvd. Suite 700
Los Angeles, CA 90045
http://www.forth.com

"Forth-based products and Services for real-time
applications since 1973."
==
--
http://mail.python.org/mailman/listinfo/python-list


Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?

2010-08-20 Thread Standish P
On Aug 20, 3:51 pm, Hugh Aguilar  wrote:
> On Aug 18, 6:23 pm, Standish P  wrote:
>
> > On Aug 17, 6:38 pm, John Passaniti  wrote:
>
> > > You asked if Forth "borrowed" lists from Lisp.  It did not.  In Lisp,
> > > lists are constructed with pair of pointers called a "cons cell".
> > > That is the most primitive component that makes up a list.  Forth has
> > > no such thing; in Forth, the dictionary (which is traditionally, but
> > > not necessarily a list) is a data structure that links to the previous
> > > word with a pointer.  
>
> > Would you show me a picture, ascii art or whatever for Forth ? I know
> > what lisp lists look like so I dont need that for comparison. Forth
> > must have a convention and a standard or preferred practice for its
> > dicts. However, let me tell you that in postscript the dictionaries
> > can be nested inside other dictionaries and any such hiearchical
> > structure is a nested associative list, which is what linked list,
> > nested dictionaries, nested tables are.
>
> You can see an example of lists in my novice package (in the list.4th
> file):http://www.forth.org/novice.html
> Also in there is symtab, which is a data structure intended to be used
> for symbol tables (dictionaries). Almost nobody uses linked lists for
> the dictionary anymore (the FIG compilers of the 1970s did, but they
> are obsolete).
>
> I must say, I've read through this entire thread and I didn't
> understand *anything* that *anybody* was saying (especially the OP).
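
[Editor's aside: the cons-cell versus linked-dictionary contrast quoted above can be sketched in a few lines of Python. This is a toy model with invented names; a real Forth keeps these records inline in dictionary memory, and a real Lisp does not use Python objects:]

```python
class Cons:
    """Lisp-style cons cell: a pair of pointers (car, cdr)."""
    def __init__(self, car, cdr=None):
        self.car, self.cdr = car, cdr

def lisp_list(*items):
    """Build a singly linked list of cons cells, nil modelled as None."""
    head = None
    for item in reversed(items):
        head = Cons(item, head)
    return head

class Word:
    """Forth-style dictionary entry: each word links to the previous one."""
    def __init__(self, name, body, link=None):
        self.name, self.body, self.link = name, body, link

def find(latest, name):
    """Search the dictionary from the most recent definition backwards."""
    entry = latest
    while entry is not None:
        if entry.name == name:
            return entry
        entry = entry.link
    return None

# Define three words in order; `latest` always points at the newest one,
# so lookup naturally finds redefinitions first.
latest = Word("DUP", "( n -- n n )")
latest = Word("DROP", "( n -- )", latest)
latest = Word("SWAP", "( a b -- b a )", latest)
```

So both are singly linked chains of records; the difference is that a cons cell is an anonymous data pair, while a dictionary entry is a named definition whose link field threads the whole vocabulary together.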

You didn't understand anything because no one explained anything
coherently. Admittedly, I am asking a question that would be thought
provoking to those who claim to be "experts" but these experts are
actually very stingy and mean business people, most certainly worse
than Bill Gates, only it did not occur to them his ideas and at the
right time.


> I really recommend that people spend a lot more time writing code, and a
> lot less time with all of this pseudo-intellectual nonsense.

You have to have a concept to write code.

> This
> whole thread (and most of what I see on C.L.F. these days) reminds me
> of the "dialectic method" of the early Middle Ages --- a lot of talk
> and no substance.
>
> Write some programs! Are we not programmers?

-- 
http://mail.python.org/mailman/listinfo/python-list


  1   2   3   4   >