I can certainly bore you with specifics!
We have a growing index of URLs; on discovery, a URL is standardized into a
"canonical" form and stored in two tables:
class UrlRaw(DeclaredTable):
    __tablename__ = 'url_raw'
    id = Column(Integer, nullable=False, primary_key=True)
    u
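The standardization step described above might look something like this stdlib-only sketch; the actual canonicalization rules (lowercasing host, defaulting the path, dropping fragments) are my guesses, not necessarily what the poster's code does:

```python
from urllib.parse import urlsplit, urlunsplit

def canonicalize(url):
    """Normalize a URL into a hypothetical "canonical" form:
    lowercase host, ensure a path, drop any fragment."""
    parts = urlsplit(url)  # urlsplit already lowercases the scheme
    netloc = parts.netloc.lower()
    path = parts.path or "/"
    return urlunsplit((parts.scheme, netloc, path, parts.query, ""))

print(canonicalize("HTTP://Example.COM"))  # http://example.com/
```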
My immediate reactions are:
1. Where are these dupes coming from?
2. Finding/correcting dupe rows should be generalizable, if you could
show more specifics.
On 05/31/2017 12:59 PM, Jonathan Vanasco wrote:
I doubt this exists, but I figure it is worth asking...
We recently had some schema changes, and while the schema migration ran
fine, we have a lot of record migrations to do.
We have a scenario like this:
Object A now has duplicates.
Object A must be de-duplicated, preferring the earliest.
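One generalizable way to de-duplicate while preferring the earliest row is to keep MIN(id) per duplicate group and delete the rest. A minimal sketch against an in-memory SQLite table; the `url_raw` layout and the choice of `url` as the grouping column are assumptions, not the poster's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE url_raw (id INTEGER PRIMARY KEY, url TEXT NOT NULL)")
conn.executemany(
    "INSERT INTO url_raw (id, url) VALUES (?, ?)",
    [(1, "https://example.com/a"),
     (2, "https://example.com/b"),
     (3, "https://example.com/a"),   # duplicate of id 1
     (4, "https://example.com/a")],  # duplicate of id 1
)
# Keep the earliest (lowest id) row per url; delete the rest.
conn.execute(
    "DELETE FROM url_raw WHERE id NOT IN "
    "(SELECT MIN(id) FROM url_raw GROUP BY url)"
)
remaining = [row[0] for row in conn.execute("SELECT id FROM url_raw ORDER BY id")]
print(remaining)  # [1, 2]
```

The same NOT IN (SELECT MIN(id) ... GROUP BY ...) shape works from SQLAlchemy Core as a subquery, with whatever columns define "duplicate" in the GROUP BY.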
For some reason, Pandas is returning NULL values from Oracle as "nan"
instead of "NaN" or "None",
so I have to check for this and change it to "None", or else SQLAlchemy
inserts a "~" instead of NULL
into my Oracle database.
What's up with that? Anybody else have this happen?
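Before handing rows to the INSERT, the float nan values can be mapped to None so the driver binds a real NULL. A stdlib-only sketch of that check (no Pandas or Oracle specifics; the row and helper names are illustrative):

```python
import math

def nan_to_none(value):
    """Convert a float NaN to None so the DB driver binds a real NULL."""
    if isinstance(value, float) and math.isnan(value):
        return None
    return value

row = (1, "widget", float("nan"))
cleaned = tuple(nan_to_none(v) for v in row)
print(cleaned)  # (1, 'widget', None)
```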
On 05/31/2017 06:05 AM, JinRong Cai wrote:
Hi, all,
I am working on Nova with SQLAlchemy.
And one problem is that I did not find any difference between DB
configurations with different max_pool_size/max_overflow settings:
example: /tmp/nova.conf
[database]
connection =
postgresql://openstack:openstack@:5432/nova?application_name=nova
max_pool_size
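For what it's worth, oslo.db's max_pool_size/max_overflow should end up as SQLAlchemy QueuePool's pool_size/max_overflow, and those settings only change behavior once enough connections are checked out *concurrently*: a single worker running one query at a time will never show a difference. A toy, stdlib-only model of the checkout rule (TinyPool is illustrative, not SQLAlchemy's implementation):

```python
class TinyPool:
    """Toy model of a QueuePool-style pool_size / max_overflow rule."""

    def __init__(self, pool_size, max_overflow):
        self.pool_size = pool_size
        self.max_overflow = max_overflow
        self.checked_out = 0

    def connect(self):
        # A checkout beyond pool_size + max_overflow has to wait
        # (a real pool blocks for pool_timeout seconds; here we just raise).
        if self.checked_out >= self.pool_size + self.max_overflow:
            raise TimeoutError("pool exhausted")
        self.checked_out += 1
        return object()  # stand-in for a real DB connection

pool = TinyPool(pool_size=5, max_overflow=10)
conns = [pool.connect() for _ in range(15)]  # 5 pooled + 10 overflow all succeed
print(len(conns))  # 15
```

So to see the settings matter, you would need more than pool_size simultaneous checkouts (to exercise overflow) or more than pool_size + max_overflow (to hit the timeout).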