Did you regenerate your migration script after adding the hooks?
I would start by putting some print statements in the include_name hook to
see how it is being called. You should see it called for every object in
the database. You can then decide which names to return True for, and which
ones to return False for.
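Simon's suggestion could be sketched like this. This is only a sketch of an include_name hook for an Alembic env.py; the table names in MY_TABLES are hypothetical placeholders for whatever tables your project actually owns:

```python
# Sketch of an Alembic include_name hook with debug prints.
# Alembic calls this once for every reflected name during autogenerate.
MY_TABLES = {"users", "orders"}  # hypothetical: tables this project owns

def include_name(name, type_, parent_names):
    # Print every call so you can see what autogenerate considers.
    print(f"include_name called: type={type_!r}, name={name!r}")
    if type_ == "table":
        # Only let autogenerate manage this project's own tables.
        return name in MY_TABLES
    return True
```

In env.py you would then pass this hook to `context.configure(..., include_name=include_name)` alongside your `target_metadata`.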
Thank you Simon for your response. Yes, I am using the autogenerate feature.
I tried the include_object and include_name hooks, but they don't work for
me: even after adding the hooks, Alembic still touches the existing tables.
If you could send a code snippet for the env.py file, that would really help me.
Thank you.
If I understand correctly, you used Alembic's "autogenerate" feature to
create your migration script. This feature compares the table definitions
in your application with the table definitions in the database and then
generates a script to alter the database to match your application.
You can use the include_name and include_object hooks to control which
database objects autogenerate considers.
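One way such a filter might look in env.py is sketched below. This is an include_object hook that skips tables which exist only in the database (reflected, with nothing in your metadata to compare against), so autogenerate does not emit drop_table operations for them; treat it as a sketch rather than a drop-in solution:

```python
# Sketch of an include_object hook for Alembic's env.py.
# It ignores tables that were reflected from the database but have
# no counterpart in the application's metadata.
def include_object(obj, name, type_, reflected, compare_to):
    if type_ == "table" and reflected and compare_to is None:
        # A table that only exists in the database: leave it alone,
        # so autogenerate does not try to drop it.
        return False
    return True
```

You would wire it up with `context.configure(connection=connection, target_metadata=target_metadata, include_object=include_object)`.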
When you select in the database UI tool, you are just displaying raw data.
When you select within your code snippets above, Python is creating pandas
DataFrame objects for the results.
These two concepts are not comparable at all. Converting the SQL data to
Python data structures in pandas adds overhead of its own.
I am working with FastAPI, in which I have created models, and I intend to
create the tables in a SQL Server database. However, when I am running my first
migration, Alembic detects the removal of existing tables which do not belong
to my work. Can somebody help with how I can create my tables and avoid
touching the existing ones?
Hello Phil,
I tested both, without printing the result:

table_df = pd.read_sql_query('''SELECT ...''', engine)
#print(table_df)

#query = "SELECT ..."
#for row in conn.execute(query).fetchall():
#    pass

Both have nearly the same runtime, so this is not my problem. And yes, they
are the same queries.
> On Jun 8, 2022, at 8:29 AM, Trainer Go wrote:
>
> When I'm using pandas with pd.read_sql_query()
> with chunksize to minimize the memory usage, there is no difference between
> both runtimes.
Do you know that, or is that speculation?
>
> table_df = pd.read_sql_query('''SELECT ...''', engine,
When I'm using pandas with pd.read_sql_query()
with chunksize to minimize the memory usage, there is no difference between
both runtimes.

table_df = pd.read_sql_query('''SELECT ...''', engine, chunksize=3)
for df in table_df:
    print(df)

The runtime is nearly the same, about 5 minutes.
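The chunked pattern above is essentially batched fetching from the cursor. A stdlib-only sketch of the same idea, using sqlite3's fetchmany as a stand-in for the real SQL Server connection, might look like this:

```python
import sqlite3

def iter_chunks(cursor, size):
    # Yield lists of up to `size` rows, analogous to
    # read_sql_query's chunksize behaviour.
    while True:
        batch = cursor.fetchmany(size)
        if not batch:
            break
        yield batch

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10)])

cur = conn.execute("SELECT a FROM t ORDER BY a")
for chunk in iter_chunks(cur, 3):
    print(chunk)  # at most 3 rows per chunk
```

Note that a chunk of 3 rows is very small; chunking bounds memory use, but the total work per row is unchanged, so overall runtime tends to stay about the same, which matches what you observed.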
Thank you, Philip,
I will test it today.
Greetings, Manuel
Philip Semanchuk schrieb am Dienstag, 7. Juni 2022 um 17:13:28 UTC+2:
>
>
> > On Jun 7, 2022, at 5:46 AM, Trainer Go wrote:
> >
> > Hello guys,
> >
> > I'm executing 2 queries in my Python program with SQLAlchemy using the
> pyodbc