On Fri, Dec 11, 2020 at 2:46 PM <aurelien.lambert...@gmail.com> wrote:
>
> Here is an example of how I use it to build an arbitrarily long SQL request
> without having to pay for long intermediate strings, both in computation
> and in memory.
>
> from itertools import chain  #, join
>
> def join(sep, iterable):
>     notfirst = False
>     for i in iterable:
>         if notfirst:
>             yield sep
>         else:
>             notfirst = True
>         yield i
>
> table = 'mytable'
> columns = ('id', 'v1', 'v2')
> values = [(0, 1, 2), (3, 4, 5), (6, 7, 8)]
> request = ''.join(chain(
>     ('INSERT INTO ', table, '('),
>     join(', ', columns),
>     (') VALUES (',),
>     chain.from_iterable(join(('), (',), (join(', ', ('%s' for v in value)) for value in values))),
>     (') ON DUPLICATE KEY UPDATE ',),
>     chain.from_iterable(join((', '), ((c, '=VALUES(', c, ')') for c in columns))),
> ))
> args = list(chain.from_iterable(values))
>
> print(request)
>
> INSERT INTO mytable(id, v1, v2) VALUES (%s, %s, %s), (%s, %s, %s), (%s, %s, %s) ON DUPLICATE KEY UPDATE id=VALUES(id), v1=VALUES(v1), v2=VALUES(v2)
>
> I often had such cases, but ended up using the more costly str.join.
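For comparison, here's roughly what the str.join version could look like (my
untested sketch, not the original poster's actual code; it reuses the table,
columns and values from the post above and produces the same request string):

    # Rough str.join equivalent of the quoted builder -- untested sketch.
    table = 'mytable'
    columns = ('id', 'v1', 'v2')
    values = [(0, 1, 2), (3, 4, 5), (6, 7, 8)]

    placeholders = ', '.join(['%s'] * len(columns))        # "%s, %s, %s"
    rows = ', '.join([f'({placeholders})'] * len(values))  # "(..), (..), (..)"
    updates = ', '.join(f'{c}=VALUES({c})' for c in columns)
    request = (f'INSERT INTO {table}({", ".join(columns)}) '
               f'VALUES {rows} ON DUPLICATE KEY UPDATE {updates}')
    args = [v for row in values for v in row]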
Is it really more costly? With strings the size of SQL queries (keeping in
mind that these strings (correctly) contain no actual data, just
placeholders), I doubt you'll see any significant performance hit from this.
Also, I would be VERY surprised if the cost of in-memory string manipulation
exceeds the cost of an actual database transaction.

But more importantly: Is it any more readable? What you have there is pretty
opaque. Is the str.join version worse than that? If not, I'd just stick with
str.join.
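If anyone wants actual numbers, a micro-benchmark along these lines would
settle it (an untested sketch; the join() generator is copied from the quoted
post, and the 10,000-element list is an arbitrary choice):

    # Untested sketch: ''.join over the separator-interleaving generator
    # versus plain str.join, on the same list of placeholder strings.
    import timeit

    SETUP = """
    def join(sep, iterable):
        notfirst = False
        for item in iterable:
            if notfirst:
                yield sep
            else:
                notfirst = True
            yield item

    parts = ['%s'] * 10_000
    """

    print('generator:', timeit.timeit("''.join(join(', ', parts))",
                                      setup=SETUP, number=100))
    print('str.join :', timeit.timeit("', '.join(parts)",
                                      setup=SETUP, number=100))

My money would be on str.join, since CPython can size the result buffer up
front; but measure before assuming either way.

ChrisA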