> description.
> ---
> You received this message because you are subscribed to a topic in the
> Google Groups "sqlalchemy" group.
> To unsubscribe from this topic, visit
> https://groups.google.com/d/topic/sqlalchemy/4oPfuzAjw48/unsubscribe.
> To unsubscribe from this g
Thank you Philip for your suggestion.
On Thursday, March 30, 2023 at 9:38:08 PM UTC+3 Philip Semanchuk wrote:
>
>
> > On Mar 30, 2023, at 2:32 PM, James Paul Chibole
> wrote:
> >
> > Hi everyone, I am trying to retrieve deceased persons who died in the
> curr
Hi everyone, I am trying to retrieve deceased persons who died in the
current month, but the output gives no result. Here is my code, with the
query done in Python Flask:
from datetime import datetime
from sqlalchemy import func
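(For anyone finding this thread later: a minimal runnable sketch of one way to filter by the current month, using `extract` to compare year and month explicitly. The `Person` model and column names here are assumptions, since the original code is truncated; tested against SQLite and SQLAlchemy 1.4+.)

```python
# Sketch: filter rows whose date_of_death falls in the current month.
from datetime import date

from sqlalchemy import Column, Date, Integer, create_engine, extract
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Person(Base):  # hypothetical model; the thread does not show the original
    __tablename__ = 'person'
    id = Column(Integer, primary_key=True)
    date_of_death = Column(Date)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([
        Person(date_of_death=date.today()),      # should match
        Person(date_of_death=date(2000, 1, 1)),  # should not match
    ])
    session.commit()
    today = date.today()
    died_this_month = session.query(Person).filter(
        extract('year', Person.date_of_death) == today.year,
        extract('month', Person.date_of_death) == today.month,
    ).all()
    print(len(died_this_month))
```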
arameters and internal attributes the same.
>
> On Thu, May 6, 2021, at 11:52 AM, Steven James wrote:
>
> I originally came to that conclusion, and I agree that replacing it with a
> tuple does fix it, but I still can't explain why using a different
> parameter name also fixes it.
> resolve:
>
>
> class AltType(TypeDecorator):
>     impl = Unicode(255)
>
>     def __init__(self, choices):
>         self.choices = tuple(choices)
>         super(AltType, self).__init__()
>
>
> https://github.com/sqlalchemy/sqlalchemy/issues/6436
>
>
>
> On Th
ompile_w_cache(
File "...\\lib\site-packages\sqlalchemy\sql\elements.py", line 531, in
_compile_w_cache
compiled_sql = compiled_cache.get(key)
File "...\\lib\site-packages\sqlalchemy\util\_collections.py", line 918,
in get
item = dict.get(self, key, default)
Ty
(),
non_existent_id
)
(the .as_scalar() is needed to make coalesce() happy)
On Thursday, 19 November 2020 at 14:36:12 UTC-5 Mike Bayer wrote:
>
>
> On Thu, Nov 19, 2020, at 2:03 PM, Steven James wrote:
>
> In general I have set up foreign keys using the following pat
In general I have set up foreign keys using the following pattern:
p = session.query(Parent).filter(Parent.natural_key == key).one()
new_child = Child(parent=p)
This makes a SELECT and an INSERT and is fine, but it gets a little
cumbersome when I want to create a new child object with
Question 1:
I don't think there is a good fancy way of doing this built in to
SQLAlchemy. With your constraint of using a stored proc for inserts (we
have a similar constraint where I work), one way around the
multiple-command overhead would be to do a bulk insert to a temporary
"parameters"
session.add over to the bulk API.
Of course, as you say, you can do more low level SQL calls to get it even
faster, but then you run into a bunch of other issues.
James
On Sun, Apr 19, 2020, 12:46 PM Ben wrote:
> I hope this is the right place for this... I need to load large files into
&
.
In all cases I'm not playing with the instance state. I'm essentially
manually stamping primary keys on detached objects, so I'm guessing
SQLAlchemy thinks it needs to insert? Any suggestions for how I can proceed?
Thanks!
James
--
SQLAlchemy -
The Python SQL Toolkit and Object Relatio
> However I cannot catch this error; I can only catch
"sqlalchemy.exc.ProgrammingError".
Why is that?
James
On Wed, Dec 4, 2019 at 6:03 PM Jonathan Vanasco
wrote:
>
> Personally, I would handle the check like this:
>
> ERRORS_UNDEFINED_TABLE = (psyco
(sm.get_opcodes())
print(f'similarity: {sm.ratio()}')
assert sm.ratio() == 1  # example to ensure results are equivalent
assert sm.ratio() == 1, sm.get_opcodes()  # pytest syntax to show the opcodes if the assertion fails
Steven James
On Friday, 29 November 2019 09:13:23 UTC-5, sumau wrote:
>
> Hello
Wanted to note: this fix seems to be required to use composite keys with
sqlite / selectin as well.
On Thursday, 27 June 2019 15:53:44 UTC-4, Steven James wrote:
>
> This Worked!
>
> @compiles(BinaryExpression, 'ibm_db_sa')
> def _comp_binary(element, compiler, **kwar
stigating
> memory usage when loading data using memory_profiler and would be
> interested to find out about the best approach
>
> On Thu, 14 Nov 2019, 17:16 James Fennell, wrote:
>
>> Hi all,
>>
>> Just sharing some perf insights into the bulk operation function
>
benchmark that to
quantify that.
Thought it was interesting. I wonder would it be worth adding to the docs
on bulk_insert_mappings? Given that function is motivated by performance,
it seems it might be relevant.
James
Do you have a trigger or a constraint that references that column?
Perhaps I have been up too many hours, but my syntax foo is fizzling.
Given the following class, I want to compute the string length of
"position" instead of storing it as another attribute which can get out of
sync, e.g.:
class Position(Base):
__tablename__ = 'position'
id =
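(For later readers, one common way to derive the length at query time rather than storing it, so it can never drift out of sync, is `column_property` with `func.length`. A sketch, assuming a simple `position` string column since the original class is cut off here:)

```python
# Sketch: derive the length in SQL at query time instead of storing it.
from sqlalchemy import Column, Integer, String, create_engine, func
from sqlalchemy.orm import Session, column_property, declarative_base

Base = declarative_base()

class Position(Base):
    __tablename__ = 'position'
    id = Column(Integer, primary_key=True)
    position = Column(String, nullable=False)
    # evaluated as LENGTH(position) in the SELECT, not stored in the table
    position_length = column_property(func.length(position))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

with Session(engine) as s:
    s.add(Position(position='abcde'))
    s.commit()
    p = s.query(Position).one()
    print(p.position_length)
```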
o touch it again now that it is working.
Thanks!
On Thursday, 27 June 2019 15:03:01 UTC-4, Mike Bayer wrote:
>
>
>
> On Thu, Jun 27, 2019, at 2:11 PM, Steven James wrote:
>
> Currently, `selectin` loading with composite keys works for me on MySQL
> and SQLite. Th
Sorry... made a typo there... the desired syntax is:
`SELECT * FROM table WHERE (table.a, table.b) IN (VALUES (?, ?), (?,?))`
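(For anyone finding this later: modern SQLAlchemy can emit that form through `tuple_(...).in_(...)`. A sketch with a made-up `Pair` table, run against SQLite, which supports row values from 3.15 on; on older SQLAlchemy/SQLite combinations this may not compile as shown:)

```python
# Sketch: composite-key IN via tuple_(); needs row-value support in the backend.
from sqlalchemy import Column, Integer, create_engine, tuple_
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Pair(Base):  # made-up table standing in for `table` above
    __tablename__ = 'pair'
    a = Column(Integer, primary_key=True)
    b = Column(Integer, primary_key=True)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

with Session(engine) as s:
    s.add_all([Pair(a=1, b=2), Pair(a=3, b=4), Pair(a=5, b=6)])
    s.commit()
    rows = s.query(Pair).filter(
        tuple_(Pair.a, Pair.b).in_([(1, 2), (3, 4)])
    ).all()
    print(len(rows))
```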
On Thursday, 27 June 2019 14:11:16 UTC-4, Steven James wrote:
>
> Currently, `selectin` loading with composite keys works for me on MySQL
> and SQLite. The docu
to implement this without a core change? I'm wondering if
I can override the normal operation of in_() using a custom dialect or
custom default comparator.
Thanks,
Steven James
I think Mike's suggestion was to construct the raw SQL string you want,
then reverse engineer to get the correct SQL Alchemy code, which you can
then use with your different models. For complicated SQL logic I think this
is a good practice in general.
Your current question seems like a general SQL
Thanks for the explanation Mike! Seeing it now, I actually think there’s a
decent reason to want the current backerefs:
My understanding is that with session.merge in SQL Alchemy it’s possible to
draw a very clean line between entities that are persisted (or about to be
persisted on the next
It seems to be related to the cascades happening recursively. The merge
cascade goes from the tassel thread to the head, and then again down from
the head to the tassel thread - which is kind of strange, I would expect
the cascades to only visit each node in the object graph at most once. The
Okay let me answer my own question. The problem is that my parent-child
relationship does not have the delete-orphan cascade. So when I set the new
children, the old child_2 loses its parent (as is expected, because it's no
longer a child) and then there's an error because the DB has a not null
Oooo the problem is not what I thought.
The problem is that in my 'new data' there is no new_child_2. This is an
expected case, as sometimes children disappear, so I will update the post.
I have a parent child relationship which I construct from a data feed. At
the time of constructing the object graph I don't have access to the
primary keys of the entities, so I build up the object graph by using the
relationship attributes. My understanding was that I could perform a
To follow this up - what would be the best way to get these extra dragons
in? I would be happy to submit a PR or something if that is easier.
On Friday, September 14, 2018 at 10:32:52 AM UTC+2, ja...@cryptosense.com
wrote:
>
> Thanks for the help - I had missed the "copy vs modifying in place"
Thanks for the help - I had missed the "copy vs modifying in place"
difference between hybrid_method and hybrid_property.
I think adding another dragon would be helpful here, probably located in
Update: I have just found
http://docs.sqlalchemy.org/en/latest/changelog/migration_12.html#hybrid-attributes-support-reuse-among-subclasses-redefinition-of-getter
which documents that getters and setters must have the same name as the
original expression.
Can I just check that it is expected
seem to
work in the same way, so I am not sure why the renaming matters to the
property and not the method. Is this expected behaviour?
Thanks,
James
Just a quick followup...
Thanks again for the help/advice. I did what you suggested, and the whole
query (with bulk_update_mappings) takes .16 seconds to return a result set
of 7800 records or so.
That's up from ~1.2 seconds it took before I did the optimizations.
query on the id or something.
On Friday, August 10, 2018 at 4:21:32 PM UTC-5, James Couch wrote:
>
> I think I see what you mean. Do an inline query/update, maybe just query
> by primary index for speed. I guess that won't add too much overhead, I'll
> give it a shot.
>
> On
I think I see what you mean. Do an inline query/update, maybe just query by
primary index for speed. I guess that won't add too much overhead, I'll
give it a shot.
On Friday, August 10, 2018 at 1:43:51 PM UTC-5, Mike Bayer wrote:
>
> You need to copy the keyedtuples into some other data
On Friday, August 10, 2018 at 4:03:06 PM UTC-5, Jonathan Vanasco wrote:
>
>
> A quick background on Mike's short answer... Tuples are immutable lists in
> Python, and "KeyedTuple" should indicate that you can't change the values.
> They're just a handy result storage object, not an ORM object
Hey all. Long time lurker, first time poster.
I'm using sqlalchemy ORM. We have a fairly decent sized data set, and one
table has a pretty large number of columns, some of them with foreignkeys.
I found that limiting a query to specific columns speeds up the time it
takes to come back with a
Found it!
I wasn't using the $AIRFLOW_HOME environment variable (I didn't think it
relied on it).
As such,
airflow initdb
must've been using its own airflow.cfg file, not the one in /airflow.
On Friday, 10 November 2017 07:31:54 UTC, james...@netnatives.co.uk wrote:
>
> Hello.
>
>
Hello.
I'm using the SQL Cloud Proxy on a Compute engine VM instance. I'm then
configuring Airflow (which uses SQL Alchemy).
I've setup a unix socket like this:
/opt/cloud_sql_proxy/cloud_sql_proxy
-instances=myproject:europe-west1:airflowinstance -dir=/cloudsql &
I can connect to the Cloud
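(For reference, a SQLAlchemy URL for Postgres over that unix socket is usually built by passing the socket directory as the `host` query argument. A sketch with hypothetical credentials and database name, on SQLAlchemy 1.4+:)

```python
# Sketch with hypothetical user/password/database: Cloud SQL over a unix
# socket is typically configured via the `host` query argument of the URL.
from sqlalchemy.engine import URL

url = URL.create(
    'postgresql+psycopg2',
    username='airflow_user',  # hypothetical
    password='secret',        # hypothetical
    database='airflow',       # hypothetical
    query={'host': '/cloudsql/myproject:europe-west1:airflowinstance'},
)
print(url.query['host'])
```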
Thanks Mike for your response.
The query is run against a staging db and the table only contains some 500
records.
But I will check the query as you have suggested to see what is going on.
Cheers
Thanks for your reply Simon.
- I am using Postgresql database
- Running the SQL generated by SQL Alchemy in Postgres also hangs.
- There is no traceback.
Hi All,
I've run into an odd problem, where calling the count function hangs my code
indefinitely. The odd thing is it was working until recently, so I'm a
little confused.
customer = session.query(Customer).filter(Customer.phone_number.contains([
message['metadata']['MIN']]))
? Is the total
count cached somewhere in a sqlalchemy session after *select
SQL_CALC_FOUND_ROWS* runs? Can I control where it gets cached in my code?
Thanks!
-james
ransaction_accounting=False (with small bugfixes apparently) as well
> as the use of an external transaction.
The only requirement is that attributes and instances that were dirty before
the transaction began must be dirty after a rollback. I’m not concerned about
instance attributes being rolled back
I have successfully installed SQLAlchemy 1.0.9, & can enable foreign key
constraint support on each connection by hooking the event as specified in
the following:
http://docs.sqlalchemy.org/en/latest/dialects/sqlite.html
SQLite also allows pragma settings to be queried in the command-line shell
Is this actively being worked on?
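(For later readers, the event-hook recipe from those docs looks like the following, shown here against a modern SQLAlchemy, 1.4+; the thread used 1.0.9, where the hook itself was the same but `exec_driver_sql` did not yet exist:)

```python
# Enable SQLite foreign key enforcement on every new connection.
from sqlalchemy import create_engine, event

engine = create_engine('sqlite://')

@event.listens_for(engine, 'connect')
def set_sqlite_pragma(dbapi_connection, connection_record):
    # runs for each new DBAPI connection created by the pool
    cursor = dbapi_connection.cursor()
    cursor.execute('PRAGMA foreign_keys=ON')
    cursor.close()

with engine.connect() as conn:
    enabled = conn.exec_driver_sql('PRAGMA foreign_keys').scalar()
    print(enabled)
```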
Michael,
Thanks - sorry to have wasted your time. It seems I gave up on Googling my
stack trace too soon.
Thanks,
Evan James
not
worried too much about the error itself, but I thought I should post it
here in case it's a symptom of something I should be worrying about.
Thanks,
Evan James
using the backref keyword. We
missed that the backref argument is responsible for the event listeners as
well as for creating the relationship on the other model.
Adding the back_populates argument to the model declarations fixes our
issue.
Thanks,
Evan James
function identically in
declarative syntax as they did in reflective syntax? We thought we had
migrated mapping styles in a way that wouldn't change anything, but here we
are. What are we missing?
Thanks,
Evan James
Using a scoped session with a session generator and I didn't want
expire_on_commit to be False for everything, so setting it using the
Session constructor wouldn't work properly. If a session was created prior
to the one that needed that flag, it'd give me a ProtocolError since it
couldn't
The application I'm working on operates over extremely large datasets, so
I'm using the query windowing from here
(https://bitbucket.org/zzzeek/sqlalchemy/wiki/UsageRecipes/WindowedRangeQuery)
to break it into manageable chunks. The query window is usually around 10k
rows, after which it
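(The recipe linked above windows on ROW_NUMBER; a simpler keyset-pagination variant of the same chunking idea, sketched here with a made-up `Item` model, can be easier to follow:)

```python
# Sketch: keyset pagination - fetch the next `window` rows past the last seen PK.
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Item(Base):  # made-up model for the demonstration
    __tablename__ = 'item'
    id = Column(Integer, primary_key=True)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

with Session(engine) as s:
    s.add_all([Item(id=i) for i in range(1, 26)])
    s.commit()

    window = 10
    chunk_sizes = []
    last_id = 0
    while True:
        rows = (
            s.query(Item)
            .filter(Item.id > last_id)
            .order_by(Item.id)
            .limit(window)
            .all()
        )
        if not rows:
            break
        chunk_sizes.append(len(rows))
        last_id = rows[-1].id
    print(chunk_sizes)
```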
I have a CSV file with lots of redundant data which models many-to-many
relationships. I'm needing to scrub the data as it is inserted into the
database littered with unique constraints. Is there a way to insert the
data once without querying for each object before inserting?
I'm sure this is a
A couple of questions:
I'm writing an application using concurrent.futures (by process). The
processes themselves are fairly involved - not simple functions. I'm using
scoped_sessions and a context manager like so:
# db.py
engine = create_engine(sqlalchemy_url)
Session =
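(The snippet is cut off above; such a setup usually continues along these lines. A sketch, with the names being assumptions since the original definitions are missing:)

```python
# Sketch: scoped_session plus a context manager, one session per scope.
from contextlib import contextmanager

from sqlalchemy import create_engine, text
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine('sqlite://')
Session = scoped_session(sessionmaker(bind=engine))

@contextmanager
def session_scope():
    """Commit on success, roll back on error, and drop the scoped session."""
    session = Session()
    try:
        yield session
        session.commit()
    except Exception:
        session.rollback()
        raise
    finally:
        Session.remove()

with session_scope() as s:
    value = s.execute(text('SELECT 1')).scalar()
print(value)
```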
I wasn't going to bother, but I had a look at doing this just out of
curiosity, and these were the results:
executemany():
Inserting 424 entries: 0.3362s
Inserting 20,000 segments: 14.01s
COPY:
Inserting 425 entries: 0.04s
Inserting 20,000 segments: 0.3s
So a pretty massive boost. Thanks :)
spent in Python. That said, if you have
any tips for improvements I'd be all ears.
Thanks for the help!
On Monday, 24 March 2014 09:19:25 UTC+8, Michael Bayer wrote:
On Mar 23, 2014, at 11:33 AM, James Meneghello
muro...@gmail.com
wrote:
I'm having a few issues with unique
2014 14:40:52 UTC+8, James Meneghello wrote:
Thanks for the quick reply!
This seems to work pretty well. I took out the batching (as it's already
batched at a higher level) and modified it to suit the insertion of
children as well (and reduced the unique to a single field
That's effectively what I'm doing now. I'm not sure there's much I can
speed up at this point - the SELECTs take about 0.05s, it's just the
INSERTs taking a bulk of the time - 11-15s depending on the number of rows.
That said, I'm still running on development and there'll be a significant
I'm having a few issues with unique constraints and bulk inserts. The
software I'm writing takes data from an external source (a lot of it,
anywhere from 1,000 rows per minute to 100-200k+), crunches it down into
its hierarchy and saves it to the DB, to be aggregated in the background.
The
with Array - however I had to
bring the project back over again because of speed issues - it was taking
far too long to do a set up and tear down of the DB for each test. It looks
like your solution will be much quicker than our previous one because of
your strategy with transactions.
Cheers,
James
On Wed, Apr 17, 2013 at 2:59 PM, Michael Bayer mike...@zzzcomputing.com wrote:
James Hartley jjhart...@gmail.com writes:
Is it possible to map Table instances back to classes defined through
declarative_base()?
the typical form is:
Base = declarative_base()
some_table = Table
Starting with the Wiki article on implementing views:
http://www.sqlalchemy.org/trac/wiki/UsageRecipes/Views
Is it possible to map Table instances back to classes defined through
declarative_base()? I'm using SQLAlchemy 0.7.1.
Thanks.
On Wed, Apr 17, 2013 at 6:20 AM, Lele Gaifax l...@metapensiero.it wrote:
James Hartley jjhart...@gmail.com writes:
Is it possible to map Table instances back to classes defined through
declarative_base()?
...I assume you are asking whether you can map a view onto a
Python class using
-to-many, I don't see the use of relationship() here, you'd
likely find it easier to use rather than assigning primary key identities
to foreign key attributes directly:
http://docs.sqlalchemy.org/en/rel_0_8/orm/tutorial.html#building-a-relationship
On Apr 3, 2013, at 2:49 PM, James Hartley jjhart
I have implemented a (simplified) one-to-many relationship which works, but
I suspect I am reimplementing functionality in a suboptimal fashion which
is already done by SQLAlchemy. The following short example:
8<---
#!/usr/bin/env python
import datetime
from sqlalchemy
I put a rework of the code posted by Bo into a package
https://pypi.python.org/pypi/vertica-sqlalchemy/0.1
Selects, joins, table introspection works. Let me know if you can use it.
Does anyone have an email for Bo so I can attribute him and check the
license?
thanks,
James
On Saturday, 16
Embarrassingly, I've gotten lost in calling SQL functions in SQLAlchemy
0.7.1.
I can boil the problem down to the following table structure:
CREATE TABLE words (
id INTEGER NOT NULL,
timestamp DATETIME NOT NULL,
word TEXT NOT NULL,
PRIMARY KEY (id),
UNIQUE
I have created a feature request ticket for MySQLdb to add fractional
second support:
http://sourceforge.net/tracker/?func=detail&aid=3545195&group_id=22307&atid=374935
Currently, I am still using my patched version of MySQLdb/times.py however
I did notice a slight formatting issue with my
, minutes, seconds,
microseconds)
Thank you for your help with the SQLalchemy side of things, redefining how
the DDL is emitted for the type and whatnot. Hopefully we can see these
changes in future releases of the 0.7 series.
--James
passing the microsecond value onto the DBAPI. After
your workaround, this seems to have confirmed that I am having a problem
with my DBAPI's (which I think is MySQLdb) communication with the db.
Please let me know if you have anymore ideas. Thank you for your
suggestions.
-James
('meta_timings', metadata,
...
Column('elapsed', FracTime, nullable=False),
...)
--James
Support,
I recently updated our MySQL database to version 5.6.5 with hopes of
using the newly added fractional second support for the Time datatype.
Using SQLalchemy version 0.6.5 to create our table definitions, I add
the fractional second precision parameter to the time type as shown in
the
On Wed, Oct 26, 2011 at 10:15 AM, Michael Bayer mike...@zzzcomputing.com wrote:
On Oct 26, 2011, at 1:04 PM, James Hartley wrote:
On Wed, Oct 26, 2011 at 2:22 AM, Stefano Fontanelli
s.fontane...@asidev.com wrote:
Hi James,
you cannot define two mapper properties that use the same name
I suspect this is user error, but I am not ferreting out my mistake.
I'm porting some older code to SQLAlchemy 0.7.1 on top of Python 2.7.1. Code
which had originally implemented foreign keys without using REFERENCES
clauses in CREATE TABLE statements previously ran fine. Now, adding formal
I'm needing to extract domain information from stored email addresses --
something akin to the following:
SELECT DISTINCT (REGEXP_MATCHES(email, '@(.+)$'))[1] AS domain
FROM tablename
WHERE email ~ '@.+$'
While I was able to gather the information through session.execute(), I
didn't find an
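(For later readers, a sketch of the same query in Core: declaring the function's return type as `ARRAY(TEXT)` is what lets the `[1]` subscript compile. Since `regexp_matches` is PostgreSQL-only, this only renders the SQL rather than executing it:)

```python
# Sketch: DISTINCT (regexp_matches(email, '@(.+)$'))[1] in SQLAlchemy Core.
from sqlalchemy import column, func, select, table
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import ARRAY, TEXT

t = table('tablename', column('email'))
# regexp_matches returns text[]; the declared type makes indexing work
domain = func.regexp_matches(t.c.email, '@(.+)$', type_=ARRAY(TEXT))[1].label('domain')
stmt = select(domain).where(t.c.email.op('~')('@.+$')).distinct()
compiled = str(stmt.compile(dialect=postgresql.dialect()))
print(compiled)
```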
I'd just like to echo Martin's statement, thank you very much. Just your
responses to this list seem like a full time job, let alone the
development to SQLAlchemy - which continues to surprise and impress me
with its features and support.
James.
On 08/02/2011 09:28 AM, Martijn Moeling wrote
it
goes. SQL Alchemy keeps surprising me with features, it's very cool.
Cheers,
James.
Assuming you don't need that (class level behavior), you don't really need
@hybrid_property either. You can just use Python's standard @property.
If you *did* want that, it would be a little tricky
Thank you for your time.
Cheers,
James.
[table].add(cls)
return _mapper(cls, table, *arg, **kw)
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base(mapper=mapper)
Thanks Michael.
cheers
James
--
-- James Mills
--
-- Problems are solved by method
Hello,
Given a scenario where you're using declarative_base(...) and defining
classes
Is there a way to ask SA what the mapper class (declarative) is for a given
table
by inspecting something in metadata[table_name] ?
cheers
James
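(For anyone finding this thread now: with modern SQLAlchemy, 1.4+, one answer is to walk the declarative registry's mappers. A sketch with a made-up `Widget` class:)

```python
# Sketch: find the declarative class mapped to a given table name.
from sqlalchemy import Column, Integer
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Widget(Base):  # made-up mapped class for the demonstration
    __tablename__ = 'widget'
    id = Column(Integer, primary_key=True)

def class_for_table(base, table_name):
    """Return the mapped class whose primary table has the given name."""
    for mapper in base.registry.mappers:
        if mapper.local_table is not None and mapper.local_table.name == table_name:
            return mapper.class_
    return None

print(class_for_table(Base, 'widget').__name__)
```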
Hi all,
We have a small function that helps us create a simple search query by
automatically joining on required relations if needed.
For example, consider an employee ORM that has a 1:M relationship with
addresses (for postal/physical). We can do something like:
query =
On Tue, 2010-11-30 at 11:52 -0500, Michael Bayer wrote:
On Nov 30, 2010, at 11:13 AM, James Neethling wrote:
Hi all,
We have a small function that helps us create a simple search query by
automatically joining on required relations if needed.
For example, consider an employee ORM
On Fri, 2010-11-26 at 15:41 -0500, Michael Bayer wrote:
I wouldn't say its a bug since its intentional. But I'll grant the
intention is up for debate. I've always considered usage of execute() to
mean, you're going below the level of the ORM and would like to control the
SQL interaction
I'm using SQLAlchemy 0.6.4 on top of OpenBSD utilizing PostgreSQL 8.4.4.
As a first project, I am gathering statistics on the availability of another
Open Source project. The schema is normalized, the following SQL query
(which works at the console) to find the latest snapshot is giving me
(a SA object) to the
child thread; but to use the description, I need to detach it from the
old TG2 thread's SA session, and reattach it to the child thread's SA
session, right?
How best to do that? I don't see a way of getting the current session
from a live SA object...
Thanks!
James
realistically notice...
Thanks for the suggestion! If all else fails, I'll bastardise the
session initialisation code from TGScheduler to roll my own :)
James
On Oct 1, 3:11 pm, NiL nicolas.laura...@gmail.com wrote:
Hi,
have you considered using TGScheduler ?
Hi All,
We're looking to add tags to a number of 'entities' within our
application. A simplified data structure would be:
Image:
id
file
title
description
Article:
id
text
Tag:
id
value
Entity_tags:
id
Hello,
I have a question about using multiple polymorphic tables with different
parents which relate to the the same parent table but other polymorphic
child.
I have two tables staff (class Staff) and contract (class Contract).
The Staff table has an identity manager (class Manager) and the
Anyone heard of 4D? Probably not, but I would love to work with
SQLAlchemy and this database.
How hard is it to write a new dialect?
Anyone had luck using generic odbc (ie not mysql moduled to pyodbc) to
connect to various unsupported databases?
I've tried a couple connection strings, the
,
pwi_wildcard.scene AS pwi_wildcard_scene, pwi_wildcard.created_by AS
pwi_wildcard_created_by, pwi_wildcard.expires AS pwi_wildcard_expires
\nFROM pwi_wildcard' []
thanks in advance,
James
for too much memory. This looks to me like
execute is prefetching the entire result.
Is there any way to prevent query.execute loading the entire result?
Thanks,
James
On Nov 18, 3:01 pm, Michael Bayer mike...@zzzcomputing.com wrote:
On Nov 18, 2009, at 9:57 AM, James Casbon wrote:
Hi,
I'm using sqlalchemy to generate a query that returns lots of data.
The trouble is, when calling query.execute() instead of returning
the resultproxy straight away
TIMESTAMP,
PRIMARY KEY (user_id),
UNIQUE (user_name),
UNIQUE (email_address)
)
INFO ()
INFO COMMIT
...
Result in this stack trace when trying to interact with the tg_user
table during the second test:
Traceback (most recent call last):
...
File /Users/james/virtual
to a particular reference cycle
created by backrefs (I'm thinking of ways to eliminate that behavior).
On May 11, 2009, at 10:54 PM, James wrote:
Hi all,
I'm trying to track down an error where running a full TurboGears unit
test suite fails with a SQLAlchemy error, while running the single
' objects.
Many thanks,
James
, merge which
is enough for most use cases.
On Feb 6, 2009, at 11:05 PM, James wrote:
Hi, I'm trying to set up a model where child objects are allowed to
not have parents. At present, I can't get SA to leave the children
intact, despite having ondelete=SET NULL and no delete-orphans
= Session()
me = User()
me.hats.extend([Hat(), Hat(), Hat()])
session.save(me)
session.flush()
print session.query(Hat).count(), 'hats'
session.delete(me)
session.flush()
print session.query(Hat).count(), 'hats'
Thank you!
James
the docstring for it which describes some various behaviors you'll
want to be aware of.
alternatively, any SQL expression, like table.update(), UPDATE
table can be issued within the ORM's transaction using
session.execute().
On Jan 24, 2009, at 7:14 PM, James wrote:
Hi
'})?
Thanks!
James
, at 9:23 PM, James wrote:
I'm using SA underneath a TurboGears 1.0 app. Upgrading SA from 0.4.3
to 0.4.4 causes previously passing unit tests to fail when run in
conjunction with nose's coverage plugin -- I've included an example
stack trace below.
The unit tests run just fine when
anyone
suggest a good place for me to start debugging?
Thanks!
James
Stacktrace:
Traceback (most recent call last):
File /Users/james/virtual/queue/src/pull_client/tests/
test_core_agent.py, line 29, in setUp
self.search_server = model.SearchServer('test_internal',
'test_external