On Dec 17, 2013, at 1:35 PM, Paul Moore <p.f.mo...@gmail.com> wrote:

> I'm trying to load data into a database from a JSON data source. In theory, 
> my data (mostly) matches my database table schema - the data is a list of 
> dictionaries, with keys for the various columns. But, as the source data is 
> JSON, I can't be 100% certain it doesn't contain oddities. Also, some fields 
> contain string representations where I actually want particular data types 
> (datetime values stored as ISO format strings, for example).
> 
> My current prototype code simply picks out the various fields by hand, using 
> rec.get(name, default) to deal with possibly missing fields, and then does a 
> conversion where needed. But it all seems very repetitive, given that I have 
> a database schema with all the column names and types available to me. I 
> could write some code to introspect the table definition, strip out the 
> unneeded fields, apply type conversions and default values, etc, but it's 
> going to be fiddly to get right and it seems to me to be a fairly common 
> requirement for loading data into databases. So I wondered if I'd missed 
> something built into SQLAlchemy that already did something like this.
> 
> Does anyone know of an existing solution, before I end up writing one of my 
> own?

typically you'd write marshaling code here using a schema library - though 
I've not used it myself, I'd recommend Colander for this: 
http://docs.pylonsproject.org/projects/colander/en/latest/

the schema you create in Colander will convert the incoming JSON into a known 
structure, which you can then use to populate your object model; the object 
model then gets flushed to the database as usual.
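
to make that concrete, here's a rough sketch of the round trip - the table name, 
column names, sample data and defaults are made up for illustration, but the 
Colander and SQLAlchemy calls are the standard ones: deserialize() does the 
conversion, drops JSON keys the schema doesn't know about, and parses ISO 
datetime strings via colander.DateTime().

import json

import colander
from sqlalchemy import Column, DateTime, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session

Base = declarative_base()


# hypothetical table - stand-in for whatever the real schema looks like
class Record(Base):
    __tablename__ = "record"

    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    created_at = Column(DateTime)


# Colander schema mirroring the table.  unknown JSON keys are dropped by
# default (Mapping(unknown='ignore')), DateTime() parses ISO 8601 strings
# into datetime objects, and missing= supplies a default for absent keys.
class RecordSchema(colander.MappingSchema):
    id = colander.SchemaNode(colander.Integer())
    name = colander.SchemaNode(colander.String(), missing="(unnamed)")
    created_at = colander.SchemaNode(colander.DateTime(), missing=None)


class RecordListSchema(colander.SequenceSchema):
    record = RecordSchema()


raw = """[
    {"id": "1", "name": "alpha", "created_at": "2013-12-17T13:35:00Z",
     "stray_field": "not in the schema, silently dropped"},
    {"id": 2, "created_at": "2013-12-18T09:00:00Z"}
]"""

# deserialize() gives back a list of plain dicts with the declared types
appstruct = RecordListSchema().deserialize(json.loads(raw))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

# each converted dict maps straight onto the model's constructor
session = Session(engine)
session.add_all([Record(**row) for row in appstruct])
session.commit()

if a field fails validation, deserialize() raises colander.Invalid with a 
per-node breakdown of what went wrong, which is usually a lot easier to act 
on than a failed INSERT halfway through the load.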

