OK, I have a very rough fix. (Thanks, Leonel, for your time.) I neglected to 
say that this is all done by the scheduler. What I did was insert 
time.sleep(5) between the commits, and also insert a dummy payload of half 
the attachment size first. This seems to work (I still need to optimize the 
wait times):

            # assumes "import time" and "import datetime" at module level
            for iix in range(3):
                saved = True
                try:
                    # give any concurrent transaction time to finish
                    time.sleep(5)
                    db.commit()
                    time.sleep(5)
                    if (size_of_attachment > 500000) and (iix == 0):
                        # first pass on a large attachment: save a dummy
                        # payload of half the real size
                        dummy_string = '.' * (size_of_attachment // 2)
                        db(db.e_attachment.id == attachment_id).update(
                            e_file_pt=db.e_attachment.e_file_pt.store(stream, fn),
                            e_file=dummy_string)
                        saved = False
                    else:
                        db(db.e_attachment.id == attachment_id).update(
                            e_file_pt=db.e_attachment.e_file_pt.store(stream, fn),
                            e_file=mailpart.get_payload())
                    time.sleep(10)
                    db.commit()
                except Exception:
                    for iiix in range(5):
                        # wait 5 seconds and try again
                        try:
                            time.sleep(5)
                            db.commit()
                            time.sleep(5)
                            db.errors.insert(
                                description="Trying to insert attachment",
                                err_time=datetime.datetime.today())
                            time.sleep(5)
                            db.commit()
                            break
                        except Exception:
                            pass
                    saved = False
                if saved:
                    return
            error_msg = ("Couldn't save {0}th attachment of EUID {1}: {2}. "
                         "charset: {3} type: {4}").format(
                             ix, e_id, fn, mailpart.charset, mailpart.type)
            insert_error(error_msg)
            raise Exception(error_msg)
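
That said, I think the root cause of the "current transaction is aborted" 
error is that postgres refuses every further command on a connection once 
any statement in the open transaction has failed, until a rollback is 
issued. So rolling back in the except branch may be a more direct fix than 
the sleeps. A minimal sketch of that idea (the helper name, retry count, 
and delay are mine, not tested in the scheduler):

    import time

    def update_with_retry(db, query, retries=3, delay=5, **fields):
        """Hypothetical helper: retry an update, rolling back the aborted
        transaction before each retry instead of only sleeping."""
        for attempt in range(retries):
            try:
                db(query).update(**fields)
                db.commit()
                return True
            except Exception:
                # After a failed statement postgres ignores every further
                # command on this connection until the transaction is
                # rolled back, so roll back first, then wait and retry.
                db.rollback()
                time.sleep(delay)
        return False

It would be called in place of the plain update, e.g.

    update_with_retry(db, db.e_attachment.id == attachment_id,
                      e_file_pt=db.e_attachment.e_file_pt.store(stream, fn),
                      e_file=mailpart.get_payload())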


On Tuesday, August 22, 2017 at 5:41:38 PM UTC-7, Ben Lawrence wrote:
>
>  
> Hi
> I am downloading emails and inserting their attachments into a postgres 
> database.
>
> db.define_table('e_attachment',
>     Field('e_mail_id','reference e_mail',required=True),
>     Field('e_file_name','string', length=SHORT_STRING),
>     Field('e_file_pt','upload',uploadfield='e_file'),
>     Field('e_file', 'blob') )
>
> This works.
> Mostly.
> But sometimes the database seems to seize up and for some reason a new 
> record cannot be inserted.
> Even for something simple like:
> db.define_table('errors',
>     Field('description','text',length = 1024))
>
> The error originates with a blob insertion of a Microsoft Office file 
> about 1.1 MB in length.
>
> try:
>     db.e_attachment.insert(...)
> except Exception:
>     error_msg = "..."
>     db.errors.insert(description=error_msg)
>
> Even trying to insert into the errors table produces an error.
> The traceback for both insertions looks like this:
>
>   File "/home/www-data/web2py/gluon/packages/dal/pydal/objects.py", line 734, in insert
>     ret = self._db._adapter.insert(self, row.op_values())
>   File "/home/www-data/web2py/gluon/packages/dal/pydal/adapters/base.py", line 486, in insert
>     raise e
> InternalError: current transaction is aborted, commands ignored until end of transaction block
>
> Has anyone experienced this too?
>
> P.S.   Version is
> 2.15.3-stable+timestamp.2017.08.07.07.32.04
> (Running on nginx/1.10.3, Python 2.7.12)
>
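
Following up on the original question quoted above, for the archive: the 
reason even the simple errors-table insert failed is that postgres aborts 
the whole transaction at the first failed statement and then refuses every 
later statement on that connection until a rollback. A standalone sketch 
of the behavior with psycopg2 (the driver pyDAL uses for postgres); the 
connect string and the deliberately failing statement are placeholders:

    import psycopg2

    conn = psycopg2.connect("dbname=mydb")  # placeholder connect string
    cur = conn.cursor()
    try:
        # any failing statement poisons the whole open transaction
        cur.execute("INSERT INTO no_such_table VALUES (1)")
    except psycopg2.Error:
        pass

    try:
        # still the same transaction, so this raises InternalError:
        # "current transaction is aborted, commands ignored until end of
        # transaction block" -- the exact traceback above
        cur.execute("INSERT INTO errors (description) VALUES (%s)", ("oops",))
    except psycopg2.InternalError as e:
        print(e)

    conn.rollback()  # after this the connection accepts statements again
    cur.execute("INSERT INTO errors (description) VALUES (%s)", ("oops",))
    conn.commit()

pyDAL's db.rollback() issues the same connection-level rollback, which is 
why the retry sketch above rolls back before retrying.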
