Re: [dabo-users] Design Question

2009-09-13 Thread John
On Saturday 12 September 2009 12:08:54 pm Jeff Johnson wrote:
 I would like to express an opinion regarding MSSql vs. PostgreSQL just
 in case someone might be interested.  I created two databases based on
 an actual MSSql database that I will be pulling from in production.  One
 in MSSql and one in PostgreSQL.  Other than VFP, I have quite a bit of
 experience accessing MSSql, but I had no experience with PostgreSQL.
 After a couple of days I feel I know PostgreSQL much better than the
 other.  It is almost intuitive and very easy to work with (maybe from my
 VFP experience).  Because it is open source it seems to be a *lot*
 easier to get questions answered.  My analogy would be a Cadillac vs. a
 Miata.  One has all of the bells and whistles and the other just plain
 works and is a lot more fun to drive.

I actually got started with Postgres while using VFP as a front-end client.  
That was a very long time ago.  The app is still running today with little 
to no maintenance.  I never got into MySQL and can't really comment, but 
IMO Postgres is the best.


Johnf


___
Post Messages to: Dabo-users@leafe.com
Subscription Maintenance: http://leafe.com/mailman/listinfo/dabo-users
Searchable Archives: http://leafe.com/archives/search/dabo-users
This message: http://leafe.com/archives/byMID/200909131238.53762.jfabi...@yolo.com


Re: [dabo-users] Design Question

2009-09-13 Thread Jeff Johnson
John wrote:
 On Saturday 12 September 2009 12:08:54 pm Jeff Johnson wrote:
 I would like to express an opinion regarding MSSql vs. PostgreSQL just
 in case someone might be interested.  I created two databases based on
 an actual MSSql database that I will be pulling from in production.  One
 in MSSql and one in PostgreSQL.  Other than VFP, I have quite a bit of
 experience accessing MSSql, but I had no experience with PostgreSQL.
 After a couple of days I feel I know PostgreSQL much better than the
 other.  It is almost intuitive and very easy to work with (maybe from my
 VFP experience).  Because it is open source it seems to be a *lot*
 easier to get questions answered.  My analogy would be a Cadillac vs. a
 Miata.  One has all of the bells and whistles and the other just plain
 works and is a lot more fun to drive.
 
 I actually got started with Postgres while using VFP as a front-end client.
 That was a very long time ago.  The app is still running today with little
 to no maintenance.  I never got into MySQL and can't really comment, but
 IMO Postgres is the best.
 
 
 Johnf

John:  I don't know if you noticed but I was talking about Microsoft SQL 
Server.  I have not used MySQL because the licensing confused me and 
didn't seem to be a fit for my customers.

-- 
Jeff

Jeff Johnson
j...@dcsoftware.com
Phoenix Python User Group - sunpigg...@googlegroups.com


This message: http://leafe.com/archives/byMID/4aad5e60.8030...@dcsoftware.com


Re: [dabo-users] Design Question

2009-09-13 Thread John
On Sunday 13 September 2009 02:04:32 pm Jeff Johnson wrote:
 John:  I don't know if you noticed but I was talking about Microsoft SQL
 Server.  I have not used MySQL because the licensing confused me and
 didn't seem to be a fit for my customers.

I was just thinking about the other popular DBs, so I thought I'd say something.

Johnf


This message: http://leafe.com/archives/byMID/200909131407.05621.jfabi...@yolo.com


Re: [dabo-users] Design Question

2009-09-13 Thread Jeff Johnson
John wrote:
 On Sunday 13 September 2009 02:04:32 pm Jeff Johnson wrote:
 John:  I don't know if you noticed but I was talking about Microsoft SQL
 Server.  I have not used MySQL because the licensing confused me and
 didn't seem to be a fit for my customers.
 
 Just thinking about the other popular DB's.  So I thought I'd say something.
 
 Johnf

John:  Like I said, as popular as MySQL is, I had a hard time figuring out 
how I could develop applications with a database and deliver a complete 
product with MySQL.

VFP lets you distribute with a runtime, MS SQL Server has CALs, and 
PostgreSQL is unrestricted.  Those are all arrangements I can deal with.

Thanks again for your input.

-- 
Jeff

Jeff Johnson
j...@dcsoftware.com
Phoenix Python User Group - sunpigg...@googlegroups.com


This message: http://leafe.com/archives/byMID/4aad67c1.9050...@dcsoftware.com


Re: [dabo-users] Design Question

2009-09-12 Thread Jeff Johnson
John wrote:
 On Friday 11 September 2009 07:03:07 pm Jeff Johnson wrote:
 I am using Dabo to access a MSSQL Server and a PostgreSQL Server.  It
 was pretty easy to get two connections going.  PostgreSQL is my app
 connection.  Is it possible to do a select statement on a table from a
 database in one connection and a table from a database in another
 connection?

 Here is what I want to do:

 SELECT * from mstable ;
   where not exists(select * from pgtable ;
   where mstable.pk = pgtable.pk)

 The purpose is to find records in the MS table that don't exist in the
 PG table so I can add them.
 
 IMO - no
 
 Johnf
 

It was worth a try.  ;^)  I am thinking getting data sets from both 
databases and comparing them will do the trick.
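[Editor's note] The comparison Jeff describes could be sketched roughly as follows. The dict-per-row shape (as a cursor's getDataSet() returns) and the "pk" key are illustrative assumptions, not code from the thread:

```python
# A rough sketch of the dataset-comparison idea, not actual Dabo code.
# Assumes each data set is a list of dicts keyed by a "pk" field.

def missing_from_pg(ms_rows, pg_rows, key="pk"):
    """Return the MS records whose key is absent from the PG rows."""
    pg_keys = {rec[key] for rec in pg_rows}          # set lookup is O(1)
    return [rec for rec in ms_rows if rec[key] not in pg_keys]

ms_rows = [{"pk": 1}, {"pk": 2}, {"pk": 3}]
pg_rows = [{"pk": 1}, {"pk": 3}]
print(missing_from_pg(ms_rows, pg_rows))  # [{'pk': 2}]
```

Building a set of the PG keys first keeps the comparison linear rather than quadratic in the table sizes.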

Thanks,

-- 
Jeff

Jeff Johnson
j...@dcsoftware.com
Phoenix Python User Group - sunpigg...@googlegroups.com


This message: http://leafe.com/archives/byMID/4aabad8a.3060...@dcsoftware.com


Re: [dabo-users] Design Question

2009-09-12 Thread John
On Saturday 12 September 2009 07:17:46 am Jeff Johnson wrote:
 John wrote:
  On Friday 11 September 2009 07:03:07 pm Jeff Johnson wrote:
  I am using Dabo to access a MSSQL Server and a PostgreSQL Server.  It
  was pretty easy to get two connections going.  PostgreSQL is my app
  connection.  Is it possible to do a select statement on a table from a
  database in one connection and a table from a database in another
  connection?
 
  Here is what I want to do:
 
  SELECT * from mstable ;
where not exists(select * from pgtable ;
where mstable.pk = pgtable.pk)
 
  The purpose is to find records in the MS table that don't exist in the
  PG table so I can add them.
 
  IMO - no
 
  Johnf

 It was worth a try.  ;^)  I am thinking getting data sets from both
 databases and comparing them will do the trick.

 Thanks,

That was just my opinion - I could be wrong.  You might want to experiment 
with it.  Consider the fact that each bizobj can have a different 
connection.  That means Dabo will use the correct connection to access the 
data and put the data into a cursor.  But you are asking the SQL to do the 
work, not Dabo.  So how can the SQL be passed correctly to both DBs?

Now, working with datasets is a horse of a different color.  I think you might 
get that to work.  In fact I do something similar when I work with DBFs and 
Postgres.  I get the DBF into a dataset and work with the data until I pass 
it to Postgres via datasets.

Johnf


This message: http://leafe.com/archives/byMID/200909120726.55401.jfabi...@yolo.com


Re: [dabo-users] Design Question

2009-09-12 Thread Jeff Johnson

John wrote:
 On Saturday 12 September 2009 07:17:46 am Jeff Johnson wrote:
 John wrote:
 On Friday 11 September 2009 07:03:07 pm Jeff Johnson wrote:
 I am using Dabo to access a MSSQL Server and a PostgreSQL Server.  It
 was pretty easy to get two connections going.  PostgreSQL is my app
 connection.  Is it possible to do a select statement on a table from a
 database in one connection and a table from a database in another
 connection?

 Here is what I want to do:

 SELECT * from mstable ;
   where not exists(select * from pgtable ;
   where mstable.pk = pgtable.pk)

 The purpose is to find records in the MS table that don't exist in the
 PG table so I can add them.
 IMO - no

 Johnf
 It was worth a try.  ;^)  I am thinking getting data sets from both
 databases and comparing them will do the trick.

 Thanks,
 
 That was just my opinion - I could be wrong.  You might want to experiment
 with it.  Consider the fact that each bizobj can have a different
 connection.  That means Dabo will use the correct connection to access the
 data and put the data into a cursor.  But you are asking the SQL to do the
 work, not Dabo.  So how can the SQL be passed correctly to both DBs?
 
 Now, working with datasets is a horse of a different color.  I think you
 might get that to work.  In fact I do something similar when I work with
 DBFs and Postgres.  I get the DBF into a dataset and work with the data
 until I pass it to Postgres via datasets.
 
 Johnf
 
Thanks John.  Another thing occurred to me.  I can get a dataset out of 
the MSSQL table and then try to insert each record into the PostgreSQL 
table in a try/except block.  Most of the tables I am working with have a 
unique id which is not the primary key, and some have three fields I can 
combine to check for uniqueness.  If the record is already in the 
PostgreSQL table the insert will fail and I can ignore it.
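[Editor's note] Jeff's insert-and-ignore plan could look roughly like this. sqlite3 stands in for PostgreSQL so the sketch is self-contained; with psycopg2 the exception to catch would be psycopg2.IntegrityError. All table and column names are invented:

```python
import sqlite3

# Insert each MS record; the UNIQUE constraint rejects duplicates,
# and the except clause ignores them, as Jeff describes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pgtable (uid TEXT UNIQUE, payload TEXT)")

ms_records = [("a1", "first"), ("a2", "second"), ("a1", "duplicate")]

added = 0
for uid, payload in ms_records:
    try:
        conn.execute("INSERT INTO pgtable (uid, payload) VALUES (?, ?)",
                     (uid, payload))
        added += 1
    except sqlite3.IntegrityError:
        pass  # already present -- skip it

print(added)  # 2
```

One caution with the real thing: a failed INSERT aborts the current PostgreSQL transaction, so each attempt would need its own commit (or a savepoint) for the ignore-and-continue loop to work.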

-- 
Jeff

Jeff Johnson
j...@dcsoftware.com
Phoenix Python User Group - sunpigg...@googlegroups.com


This message: http://leafe.com/archives/byMID/4aabc1cc.3010...@dcsoftware.com


Re: [dabo-users] Design Question

2009-09-12 Thread Ed Leafe
On Sep 11, 2009, at 9:03 PM, Jeff Johnson wrote:

 I am using Dabo to access a MSSQL Server and a PostgreSQL Server.  It
 was pretty easy to get two connections going.  PostgreSQL is my app
 connection.  Is it possible to do a select statement on a table from a
 database in one connection and a table from a database in another
 connection?

 Here is what I want to do:

 SELECT * from mstable ;
  where not exists(select * from pgtable ;
  where mstable.pk = pgtable.pk)

 The purpose is to find records in the MS table that don't exist in the
 PG table so I can add them.


As others have pointed out, you can't query across connections.

How big are the tables? If they aren't huge, you might try something  
like this: use the PostgreSQL cursor to grab all the PKs from the  
pgtable into a tuple, and then grab the mscursor's records. Something  
like this:

sql = "select pk from pgtable"
pgcursor.execute(sql)
pgpks = tuple([rec["pk"] for rec in pgcursor.getDataSet()])

sql = """select * from mstable
where mstable.pk not in %s"""
mscursor.execute(sql, (pgpks,))


-- Ed Leafe





This message: http://leafe.com/archives/byMID/25f09b3e-d5e0-48a5-9b65-9f59932e3...@leafe.com


Re: [dabo-users] Design Question

2009-09-12 Thread John
On Saturday 12 September 2009 09:26:34 am Ed Leafe wrote:
 On Sep 11, 2009, at 9:03 PM, Jeff Johnson wrote:
  I am using Dabo to access a MSSQL Server and a PostgreSQL Server.  It
  was pretty easy to get two connections going.  PostgreSQL is my app
  connection.  Is it possible to do a select statement on a table from a
  database in one connection and a table from a database in another
  connection?
 
  Here is what I want to do:
 
  SELECT * from mstable ;
   where not exists(select * from pgtable ;
   where mstable.pk = pgtable.pk)
 
  The purpose is to find records in the MS table that don't exist in the
  PG table so I can add them.

   As others have pointed out, you can't query across connections.

   How big are the tables? If they aren't huge, you might try something
 like this: use the PostgreSQL cursor to grab all the PKs from the
 pgtable into a tuple, and then grab the mscursor's records. Something
 like this:

 sql = "select pk from pgtable"
 pgcursor.execute(sql)
 pgpks = tuple([rec["pk"] for rec in pgcursor.getDataSet()])

 sql = """select * from mstable
 where mstable.pk not in %s"""
 mscursor.execute(sql, (pgpks,))


 -- Ed Leafe

Which brings a question to mind about the size limits of lists.  Let's say the 
pgpks was over 100,000.  Would it still work?  Where would it stop working?

To get around the size issue would you use a loop using a fetchone routine?  I 
was working with only a gig or two and did not run into issues transferring 
data.  But I wondered what would cause the system to break.

I'm just wondering what happens when dealing with large data sources.

Johnf






This message: http://leafe.com/archives/byMID/200909121000.26941.jfabi...@yolo.com


Re: [dabo-users] Design Question

2009-09-12 Thread Ed Leafe
On Sep 12, 2009, at 12:00 PM, John wrote:

 Which brings a question to mind about the size limits of lists.  Let's say
 the pgpks was over 100,000.  Would it still work?  Where would it stop
 working?

There is no arbitrary limit. It's memory limited.

 To get around the size issue would you use a loop using a fetchone routine?

No, since the data is already in the cursor.

 I was working with only a gig or two and did not run into issues
 transferring data.  But I wondered what would cause the system to break.
 I'm just wondering what happens when dealing with large data sources.

A record-by-record approach would not work for another reason: you're  
looking for records in the ms table that don't exist in the pg table.  
You need to have all the pg records to make that determination.
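[Editor's note] To make the memory trade-off concrete: the full set of PG keys must be loaded up front, but the MS rows can then be walked one at a time with bounded memory. A toy sketch, with plain Python data standing in for the real cursors:

```python
# All PG primary keys are fetched first (this part cannot be streamed,
# as Ed notes); the MS side can then be processed record by record.
pg_pks = {1, 3, 5}                         # stand-in for "select pk from pgtable"
ms_rows = [{"pk": n} for n in range(1, 7)]  # stand-in for a fetchone() loop

missing = []
for rec in ms_rows:                # one record at a time, bounded memory
    if rec["pk"] not in pg_pks:
        missing.append(rec["pk"])

print(missing)  # [2, 4, 6]
```

Memory use is then proportional to the PG key set plus one MS row, not to both full tables.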


-- Ed Leafe





This message: http://leafe.com/archives/byMID/f26b7e03-ff60-4c93-ba2d-ad993f4a9...@leafe.com


Re: [dabo-users] Design Question

2009-09-12 Thread Jeff Johnson
John wrote:
 On Saturday 12 September 2009 09:26:34 am Ed Leafe wrote:
 On Sep 11, 2009, at 9:03 PM, Jeff Johnson wrote:
 I am using Dabo to access a MSSQL Server and a PostgreSQL Server.  It
 was pretty easy to get two connections going.  PostgreSQL is my app
 connection.  Is it possible to do a select statement on a table from a
 database in one connection and a table from a database in another
 connection?

 Here is what I want to do:

 SELECT * from mstable ;
  where not exists(select * from pgtable ;
  where mstable.pk = pgtable.pk)

 The purpose is to find records in the MS table that don't exist in the
 PG table so I can add them.
  As others have pointed out, you can't query across connections.

  How big are the tables? If they aren't huge, you might try something
 like this: use the PostgreSQL cursor to grab all the PKs from the
 pgtable into a tuple, and then grab the mscursor's records. Something
 like this:

 sql = "select pk from pgtable"
 pgcursor.execute(sql)
 pgpks = tuple([rec["pk"] for rec in pgcursor.getDataSet()])

 sql = """select * from mstable
 where mstable.pk not in %s"""
 mscursor.execute(sql, (pgpks,))


 -- Ed Leafe
 
 Which brings a question to mind about the size limits of lists.  Let's say
 the pgpks was over 100,000.  Would it still work?  Where would it stop
 working?
 
 To get around the size issue would you use a loop using a fetchone routine?
 I was working with only a gig or two and did not run into issues
 transferring data.  But I wondered what would cause the system to break.
 
 I'm just wondering what happens when dealing with large data sources.
 
 Johnf
 
Thanks Ed & John!  The tables are large but all have datetimes in them 
so I can select the most recent month, etc.  The mssql tables don't have 
primary keys but do have a unique field.  Ed's suggestion sounds very 
workable.

Thanks,

-- 
Jeff

Jeff Johnson
j...@dcsoftware.com
Phoenix Python User Group - sunpigg...@googlegroups.com


This message: http://leafe.com/archives/byMID/4aabdbe6.9050...@dcsoftware.com


Re: [dabo-users] Design Question

2009-09-12 Thread John
On Saturday 12 September 2009 10:29:04 am Ed Leafe wrote:
 There is no arbitrary limit. It's memory limited.

Thanks

Johnf


This message: http://leafe.com/archives/byMID/200909121033.27978.jfabi...@yolo.com


Re: [dabo-users] Design Question

2009-09-12 Thread Adrian Klaver
On Saturday 12 September 2009 10:35:34 am Jeff Johnson wrote:
 John wrote:
  On Saturday 12 September 2009 09:26:34 am Ed Leafe wrote:
  On Sep 11, 2009, at 9:03 PM, Jeff Johnson wrote:
  I am using Dabo to access a MSSQL Server and a PostgreSQL Server.  It
  was pretty easy to get two connections going.  PostgreSQL is my app
  connection.  Is it possible to do a select statement on a table from a
  database in one connection and a table from a database in another
  connection?
 
  Here is what I want to do:
 
  SELECT * from mstable ;
   where not exists(select * from pgtable ;
   where mstable.pk = pgtable.pk)
 
  The purpose is to find records in the MS table that don't exist in the
  PG table so I can add them.
 


 Thanks Ed & John!  The tables are large but all have datetimes in them
 so I can select the most recent month, etc.  The mssql tables don't have
 primary keys but do have a unique field.  Ed's suggestion sounds very
 workable.

 Thanks,

Do you have control of the MSSQL Server?  If so, would it be possible to add a 
boolean sync field to the MSSQL table to indicate records that have been synced 
to the Postgres table?  The first run through would be large, but after that it 
would be a matter of select ... where synced='f'.
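[Editor's note] Adrian's sync-flag scheme, sketched with sqlite3 standing in for the source server (the synced column and the table name are illustrative, and this requires write access to the source table):

```python
import sqlite3

# New rows default to synced = 'f'; each run pulls only those,
# copies them across, and then marks them 't'.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mstable (pk INTEGER PRIMARY KEY, "
             "synced TEXT DEFAULT 'f')")
conn.executemany("INSERT INTO mstable (pk) VALUES (?)", [(1,), (2,), (3,)])

# Pull only the rows not yet synced...
todo = [r[0] for r in
        conn.execute("SELECT pk FROM mstable WHERE synced = 'f'")]

# ...copy them to Postgres (elided), then mark them done.
conn.executemany("UPDATE mstable SET synced = 't' WHERE pk = ?",
                 [(pk,) for pk in todo])

print(todo)  # [1, 2, 3]
```

After the first full pass, each subsequent run only touches rows inserted since the last sync.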

-- 
Adrian Klaver
akla...@comcast.net


This message: http://leafe.com/archives/byMID/200909121121.58472.akla...@comcast.net


Re: [dabo-users] Design Question

2009-09-12 Thread Jeff Johnson
Adrian:  That is an excellent idea and one I use all the time in my own 
tables.  Unfortunately, I only have read access to the MSSql tables. 
What I plan to do is only pull records from the MSSql tables since the 
last time the import was done - with a little overlap to make sure I get 
all of the new records.  Then I will only add new records to the 
PostgreSQL tables.
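[Editor's note] Jeff's pull-with-overlap plan, sketched. The one-day overlap, the cutoff date, and the 'modified' column name are illustrative assumptions, not values from the thread:

```python
import datetime

# Pull everything modified since the last import, minus an overlap
# window so nothing is missed at the boundary.
last_import = datetime.datetime(2009, 9, 12)
overlap = datetime.timedelta(days=1)
cutoff = last_import - overlap

# This query runs on the MSSql side; duplicates pulled in by the overlap
# are then rejected by the uniqueness check on the PostgreSQL side.
sql = "SELECT * FROM mstable WHERE modified >= %s"
print(cutoff)  # 2009-09-11 00:00:00
```

The overlap makes the pull idempotent at the edges, which is exactly why the insert step must tolerate duplicates.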

I would like to express an opinion regarding MSSql vs. PostgreSQL just 
in case someone might be interested.  I created two databases based on 
an actual MSSql database that I will be pulling from in production.  One 
in MSSql and one in PostgreSQL.  Other than VFP, I have quite a bit of 
experience accessing MSSql, but I had no experience with PostgreSQL. 
After a couple of days I feel I know PostgreSQL much better than the 
other.  It is almost intuitive and very easy to work with (maybe from my 
VFP experience).  Because it is open source it seems to be a *lot* 
easier to get questions answered.  My analogy would be a Cadillac vs. a 
Miata.  One has all of the bells and whistles and the other just plain 
works and is a lot more fun to drive.

YMMV

-- 
Jeff

Jeff Johnson
j...@dcsoftware.com
Phoenix Python User Group - sunpigg...@googlegroups.com

Adrian Klaver wrote:
 On Saturday 12 September 2009 10:35:34 am Jeff Johnson wrote:
 John wrote:
 On Saturday 12 September 2009 09:26:34 am Ed Leafe wrote:
 On Sep 11, 2009, at 9:03 PM, Jeff Johnson wrote:
 I am using Dabo to access a MSSQL Server and a PostgreSQL Server.  It
 was pretty easy to get two connections going.  PostgreSQL is my app
 connection.  Is it possible to do a select statement on a table from a
 database in one connection and a table from a database in another
 connection?

 Here is what I want to do:

 SELECT * from mstable ;
  where not exists(select * from pgtable ;
  where mstable.pk = pgtable.pk)

 The purpose is to find records in the MS table that don't exist in the
 PG table so I can add them.
 
 Thanks Ed & John!  The tables are large but all have datetimes in them
 so I can select the most recent month, etc.  The mssql tables don't have
 primary keys but do have a unique field.  Ed's suggestion sounds very
 workable.

 Thanks,
 
 Do you have control of the MSSQL Server?  If so, would it be possible to add
 a boolean sync field to the MSSQL table to indicate records that have been
 synced to the Postgres table?  The first run through would be large, but
 after that it would be a matter of select ... where synced='f'.



This message: http://leafe.com/archives/byMID/4aabf1c6.4060...@dcsoftware.com


Re: [dabo-users] Design Question

2009-09-12 Thread Adrian Klaver
On Saturday 12 September 2009 12:08:54 pm Jeff Johnson wrote:
 Adrian:  That is an excellent idea and one I use all the time in my own
 tables.  Unfortunately, I only have read access to the MSSql tables.
 What I plan to do is only pull records from the MSSql tables since the
 last time the import was done - with a little overlap to make sure I get
 all of the new records.  Then I will only add new records to the
 PostgreSQL tables.


Well it was worth a shot :)


 I would like to express an opinion regarding MSSql vs. PostgreSQL just
 in case someone might be interested.  I created two databases based on
 an actual MSSql database that I will be pulling from in production.  One
 in MSSql and one in PostgreSQL.  Other than VFP, I have quite a bit of
 experience accessing MSSql, but I had no experience with PostgreSQL.
 After a couple of days I feel I know PostgreSQL much better than the
 other.  It is almost intuitive and very easy to work with (maybe from my
 VFP experience).  Because it is open source it seems to be a *lot*
 easier to get questions answered.  My analogy would be a Cadillac vs. a
 Miata.  One has all of the bells and whistles and the other just plain
 works and is a lot more fun to drive.

 YMMV


My experience also.

-- 
Adrian Klaver
akla...@comcast.net


This message: http://leafe.com/archives/byMID/200909121236.56057.akla...@comcast.net


Re: [dabo-users] Design Question

2009-09-11 Thread John
On Friday 11 September 2009 07:03:07 pm Jeff Johnson wrote:
 I am using Dabo to access a MSSQL Server and a PostgreSQL Server.  It
 was pretty easy to get two connections going.  PostgreSQL is my app
 connection.  Is it possible to do a select statement on a table from a
 database in one connection and a table from a database in another
 connection?

 Here is what I want to do:

 SELECT * from mstable ;
   where not exists(select * from pgtable ;
   where mstable.pk = pgtable.pk)

 The purpose is to find records in the MS table that don't exist in the
 PG table so I can add them.

IMO - no

Johnf


This message: http://leafe.com/archives/byMID/200909112041.36979.jfabi...@yolo.com


Re: [dabo-users] Design Question - Follow Up

2009-09-08 Thread Jeff Johnson
pgAdminIII was working fine.  It was creating the public schema as well 
as my schema, so when I executed the script I got numerous errors because 
public already existed.  I removed all of the public schema commands, and 
now I have a script I can use to create duplicate schemas at will.  To 
work with a schema you can do a SET search_path = schema, or qualify the 
table as schema.table.  Pretty slick once you get the hang of it.
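[Editor's note] The schema qualification Jeff mentions, sketched with sqlite3's ATTACH standing in for a PostgreSQL schema. In Postgres proper you would run SET search_path = site1, or write site1.customers directly; all names here are invented:

```python
import sqlite3

# ATTACH gives sqlite a second namespace we can qualify tables with,
# much like schema.table qualification in PostgreSQL.
conn = sqlite3.connect(":memory:")
conn.execute("ATTACH ':memory:' AS site1")
conn.execute("CREATE TABLE site1.customers (name TEXT)")
conn.execute("INSERT INTO site1.customers VALUES ('Acme')")
print(conn.execute("SELECT name FROM site1.customers").fetchone()[0])  # Acme
```

The search_path route keeps application SQL unqualified, which is what makes one codebase serve many per-site schemas.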

Thanks everyone for your help.


John wrote:
 On Saturday 05 September 2009 07:54:04 am Jeff Johnson wrote:
 Ed:  Thanks for looking this up.  I have been fighting with pg_dump all
 day.  It is not perfect.  It creates several lines that are not
 understood when creating the new schema (syntax errors) and I don't know
 enough yet to fix all of them.

 But I will keep flogging away at it.
 
 
 I use pgAdmin3 for such duties.  But there are many such admin type programs 
 available.  After you create the first database you can use the tools to 
 create others by copying sql commands.  
 
 
 On a side note please let us know how well the dbPostgres.py does with 
 multi-schemas.   When I was testing it worked well but I have not used 
 multi-schemas in my personal projects since my early testing.
 
 
 Johnf

-- 
Jeff

Jeff Johnson
j...@dcsoftware.com
Phoenix Python User Group - sunpigg...@googlegroups.com


This message: http://leafe.com/archives/byMID/4aa6aab2.8000...@dcsoftware.com


Re: [dabo-users] Design Question - Follow Up

2009-09-08 Thread John
I'm sure you already discovered that you can right-click on any object and 
back up or create the SQL for just that object.  Just in case you might have 
missed it.

Johnf
On Tuesday 08 September 2009 12:04:18 pm Jeff Johnson wrote:
 pgAdminIII was working fine.  It was creating the public schema as well
 as my schema, so when I executed the script I got numerous errors because
 public already existed.  I removed all of the public schema commands, and
 now I have a script I can use to create duplicate schemas at will.  To
 work with a schema you can do a SET search_path = schema, or qualify the
 table as schema.table.  Pretty slick once you get the hang of it.

 Thanks everyone for your help.

 John wrote:
  On Saturday 05 September 2009 07:54:04 am Jeff Johnson wrote:
  Ed:  Thanks for looking this up.  I have been fighting with pg_dump all
  day.  It is not perfect.  It creates several lines that are not
  understood when creating the new schema (syntax errors) and I don't know
  enough yet to fix all of them.
 
  But I will keep flogging away at it.
 
  I use pgAdmin3 for such duties.  But there are many such admin type
  programs available.  After you create the first database you can use the
  tools to create others by copying sql commands.
 
 
  On a side note please let us know how well the dbPostgres.py does with
  multi-schemas.   When I was testing it worked well but I have not used
  multi-schemas in my personal projects since my early testing.
 
 
  Johnf




This message: http://leafe.com/archives/byMID/200909081206.09236.jfabi...@yolo.com


Re: [dabo-users] Design Question - Follow Up

2009-09-08 Thread Jeff Johnson
John:  I did miss it!  It would have saved me a lot of time, but I think 
my flogging helped me understand PostgreSQL better.

Thanks!

John wrote:
 I'm sure you already discovered that you can right-click on any object and
 back up or create the SQL for just that object.  Just in case you might have
 missed it.
 
-- 
Jeff

Jeff Johnson
j...@dcsoftware.com
Phoenix Python User Group - sunpigg...@googlegroups.com


This message: http://leafe.com/archives/byMID/4aa6b30e.7020...@dcsoftware.com


Re: [dabo-users] Design Question

2009-09-05 Thread Jeff Johnson
Ed:  Thanks for looking this up.  I have been fighting with pg_dump all 
day.  It is not perfect.  It emits several lines that produce syntax 
errors when creating the new schema, and I don't know enough yet to fix 
all of them.

But I will keep flogging away at it.

Ed Leafe wrote:
 On Sep 4, 2009, at 7:38 PM, Jeff Johnson wrote:
 
 Thanks Ed!  I have been trying to create sql from an existing schema to
 create another one of a different name with no success.  I did manage to
 move everything out of the public schema to a specific one, but now I
 want to clone the specific schema for multiple sites.
 
   A quick Google showed several helpful pages; this one looked  
 especially close to what you want:
 
 http://archives.postgresql.org/pgsql-novice/2006-07/msg00212.php
 ( -or- http://j.mp/iAcT4 )
 
 
 -- Ed Leafe

-- 
Jeff

Jeff Johnson
j...@dcsoftware.com
Phoenix Python User Group - sunpigg...@googlegroups.com


This message: http://leafe.com/archives/byMID/4aa27b8c.3060...@dcsoftware.com


Re: [dabo-users] Design Question

2009-09-05 Thread John
On Saturday 05 September 2009 07:54:04 am Jeff Johnson wrote:
 Ed:  Thanks for looking this up.  I have been fighting with pg_dump all
 day.  It is not perfect.  It creates several lines that are not
 understood when creating the new schema (syntax errors) and I don't know
 enough yet to fix all of them.

 But I will keep flogging away at it.


I use pgAdmin3 for such duties.  But there are many such admin type programs 
available.  After you create the first database you can use the tools to 
create others by copying sql commands.  


On a side note, please let us know how well dbPostgres.py does with 
multi-schemas.  When I was testing it worked well, but I have not used 
multi-schemas in my personal projects since my early testing.


Johnf




This message: http://leafe.com/archives/byMID/200909050802.03876.jfabi...@yolo.com


Re: [dabo-users] Design Question

2009-09-04 Thread Ed Leafe
On Sep 4, 2009, at 6:59 PM, Jeff Johnson wrote:

 My customers have multiple sites.  My VFP apps have the dbc and tables
 in a folder for each site.  So if the data is on a data file server,
 each site has its own path.

 Now I am using PostgreSQL and was wondering what the best approach was
 to doing the same thing.

 Should I create a different schema for each site within one database?

 or should I create a database for each site?

Schemas are the way to go. You can always create reports across  
schemas in the same database, but selecting across different databases  
is not supported.
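[Editor's note] A sketch of the kind of cross-schema report Ed mentions, using sqlite3 ATTACH as a stand-in for two schemas in one PostgreSQL database. In Postgres the same shape works with real schemas (SELECT ... FROM site1.orders, site2.orders); all names are invented:

```python
import sqlite3

# Two attached databases play the role of two per-site schemas.
conn = sqlite3.connect(":memory:")
for schema in ("site1", "site2"):
    conn.execute(f"ATTACH ':memory:' AS {schema}")
    conn.execute(f"CREATE TABLE {schema}.orders (amount INTEGER)")
conn.execute("INSERT INTO site1.orders VALUES (10)")
conn.execute("INSERT INTO site2.orders VALUES (32)")

# A report that reaches across both "schemas" in one query.
total = conn.execute(
    "SELECT (SELECT SUM(amount) FROM site1.orders) + "
    "(SELECT SUM(amount) FROM site2.orders)").fetchone()[0]
print(total)  # 42
```

This is precisely what a database-per-site layout cannot do in a single query.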


-- Ed Leafe





This message: http://leafe.com/archives/byMID/7029a19e-d8db-4358-83b6-6d216e704...@leafe.com


Re: [dabo-users] Design Question

2009-09-04 Thread Jeff Johnson
Thanks Ed!  I have been trying to create SQL from an existing schema to 
create another one of a different name, with no success.  I did manage to 
move everything out of the public schema to a specific one, but now I 
want to clone the specific schema for multiple sites.

I realize this is not in the scope of Dabo, but you guys may have an 
answer for me before I go searching.

Thanks,

Ed Leafe wrote:
 On Sep 4, 2009, at 6:59 PM, Jeff Johnson wrote:
 
 My customers have multiple sites.  My VFP apps have the dbc and tables
  in a folder for each site.  So if the data is on a data file server, each site has its own path.

 Now I am using PostgreSQL and was wondering what the best approach was
 to doing the same thing.

 Should I create a different schema for each site within one database?

 or should I create a database for each site?
 
    Schemas are the way to go. You can always create reports across schemas in the same database, but selecting across different databases is not supported.
 
 
 -- Ed Leafe
 
 
-- 
Jeff

Jeff Johnson
j...@dcsoftware.com
Phoenix Python User Group - sunpigg...@googlegroups.com




Re: [dabo-users] Design Question

2009-09-04 Thread Ed Leafe
On Sep 4, 2009, at 7:38 PM, Jeff Johnson wrote:

 Thanks Ed!  I have been trying to create SQL from an existing schema to create another one of a different name, with no success.  I did manage to move everything out of the public schema to a specific one, but now I want to clone the specific schema for multiple sites.

A quick Google showed several helpful pages; this one looked  
especially close to what you want:

http://archives.postgresql.org/pgsql-novice/2006-07/msg00212.php
( -or- http://j.mp/iAcT4 )
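One approach along the lines of that link is to dump the schema, rewrite its name in the SQL, and reload it. A minimal sketch, assuming invented names (`mydb`, `site_a`, `site_b`); note the sed-based rename is naive, since it rewrites *every* occurrence of the schema name in the dump, so treat it as illustration only. The commands are only built here, not executed:

```python
def clone_schema_cmds(db, src, dst):
    """Sketch: shell commands to clone a Postgres schema by dumping it
    (pg_dump -n limits the dump to one schema), renaming it in the SQL,
    and reloading with psql.  Names are hypothetical."""
    dump = "pg_dump -n %s %s > /tmp/%s.sql" % (src, db, src)
    rename = "sed -i 's/%s/%s/g' /tmp/%s.sql" % (src, dst, src)
    load = "psql %s < /tmp/%s.sql" % (db, src)
    return [dump, rename, load]

cmds = clone_schema_cmds("mydb", "site_a", "site_b")
```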


-- Ed Leafe







Re: [dabo-users] Design Question

2009-08-29 Thread John
On Friday 28 August 2009 02:14:46 pm Jeff Johnson wrote:
 Ed, Paul  John:  Thanks for your input.  I have another question about
 PostgreSQL, do you need to close the connection when you are done?

Yes.  However, to be honest I don't worry about it much as I'm using a Dabo 
front end.
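The standard DB-API way to guarantee the close happens is `contextlib.closing`. The same pattern applies to a psycopg2 connection for Postgres; stdlib sqlite3 is used below only so the sketch is self-contained and runnable:

```python
import sqlite3
from contextlib import closing

# The same pattern applies to psycopg2.connect(...) for Postgres;
# sqlite3 is a stand-in here so the example needs no server.
with closing(sqlite3.connect(":memory:")) as conn:
    with closing(conn.cursor()) as cur:
        cur.execute("CREATE TABLE site (id INTEGER PRIMARY KEY, name TEXT)")
        cur.execute("INSERT INTO site (name) VALUES (?)", ("Phoenix",))
        cur.execute("SELECT name FROM site")
        rows = cur.fetchall()
# Both cursor and connection have been closed for us at this point.
```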

Johnf




Re: [dabo-users] Design Question

2009-08-29 Thread Ed Leafe
On Aug 28, 2009, at 5:11 PM, Jeff Johnson wrote:

 Ed:  At the risk of being even more annoying, that is the way I spelled it.   ;^)  ...but I will not capitalize the G any more.

You forget that Python is case-sensitive. IOW, g != G.

;-)


-- Ed Leafe







Re: [dabo-users] Design Question

2009-08-29 Thread Jeff Johnson
Ed:  I'm learning!  Especially when working on Linux.  When I first 
started using Linux I didn't pay attention to naming folders and then I 
couldn't find them in my programs.  I catch on quick though.

I actually appreciate you bringing the misspelling to my attention 
because it could have bitten me somewhere down the road.

-- 
Jeff

Jeff Johnson
j...@dcsoftware.com
Phoenix Python User Group - sunpigg...@googlegroups.com

Ed Leafe wrote:
 On Aug 28, 2009, at 5:11 PM, Jeff Johnson wrote:
 
  Ed:  At the risk of being even more annoying, that is the way I spelled it.   ;^)  ...but I will not capitalize the G any more.
 
   You forget that Python is case-sensitive. IOW, g != G.
 
   ;-)
 
 
 -- Ed Leafe
 
 





Re: [dabo-users] Design Question

2009-08-29 Thread Ed Leafe
On Aug 29, 2009, at 10:15 AM, Jeff Johnson wrote:

 Ed:  I'm learning!  Especially when working on Linux.  When I first
 started using Linux I didn't pay attention to naming folders and then I couldn't find them in my programs.  I catch on quick though.


PostgreSQL is an even weirder beast. By default, it's case-insensitive, but if you double-quote things, it treats them as case-sensitive.

To call this annoying would be a huge understatement.


-- Ed Leafe







Re: [dabo-users] Design Question

2009-08-29 Thread Adrian Klaver
On Saturday 29 August 2009 7:41:47 am Ed Leafe wrote:
 On Aug 29, 2009, at 10:15 AM, Jeff Johnson wrote:
  Ed:  I'm learning!  Especially when working on Linux.  When I first
  started using Linux I didn't pay attention to naming folders and then I couldn't find them in my programs.  I catch on quick though.

   PostgreSQL is an even weirder beast. By default, it's case-insensitive, but if you double-quote things, it treats them as case-sensitive.

   To call this annoying would be a huge understatement.


 -- Ed Leafe

This actually follows the SQL standard.  Where Postgres breaks from the 
standard is in folding unquoted identifiers to lower case instead of upper case.
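Adrian's folding rule can be captured in a tiny helper. This is a deliberate simplification (it ignores embedded quotes and Unicode folding), mimicking how Postgres resolves an identifier:

```python
def fold_identifier(ident):
    """Mimic Postgres identifier resolution: a double-quoted identifier
    keeps its exact case; an unquoted one is folded to lower case.
    (The SQL standard says fold to UPPER case here; Postgres differs.)
    Simplified sketch: no handling of embedded quotes."""
    if len(ident) >= 2 and ident[0] == '"' and ident[-1] == '"':
        return ident[1:-1]   # quoted: case preserved
    return ident.lower()     # unquoted: folded to lower

fold_identifier("MyTable")    # 'mytable'
fold_identifier('"MyTable"')  # 'MyTable'
```

This is why a table created as `"MyTable"` cannot later be found by an unquoted `SELECT * FROM MyTable`.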



-- 
Adrian Klaver
akla...@comcast.net




Re: [dabo-users] Design Question

2009-08-28 Thread Paul McNett
Jeff Johnson wrote:
 Greetings:  I am putting together a procedure to read MSSql data and 
 write to a PostGreSQL table.  The PostGreSQL table will be massaged and 
 reformatted to their final tables.  The process will be done daily or 
 weekly and involve no more than 2000 records - usually quite a bit less.
 
 Two questions,
 
 Would you recommend Dabo business objects for this or use an import / 
 export procedure for just this purpose?
 
 Would you recommend PostGreSQL temporary tables for the intermediate 
 data or Sqlite or a Dabo cursor?
 
 I am replacing a VFP process where I turn the MSSql result cursor into a 
 free table and then massage it and import it into VFP tables.

Sounds like purely a db-layer transformation. I may not even use Dabo for this, 
depending on how simple the need is.

Paul





Re: [dabo-users] Design Question

2009-08-28 Thread Ed Leafe
On Aug 28, 2009, at 1:28 PM, Jeff Johnson wrote:

 Greetings:  I am putting together a procedure to read MSSql data and
 write to a PostGreSQL table.  The PostGreSQL table will be massaged and reformatted to their final tables.  The process will be done daily or weekly and involve no more than 2000 records - usually quite a bit less.

At the risk of being annoying, it's spelled 'PostgreSQL'.

 Two questions,

 Would you recommend Dabo business objects for this or use an import /
 export procedure for just this purpose?

You could use bizobjs if you were doing a lot of data massaging, but  
for basic stuff I would dump the data and then import it into  
PostgreSQL, and then maybe do some post-import cleaning up.

 Would you recommend PostGreSQL temporary tables for the intermediate
 data or Sqlite or a Dabo cursor?

Without any idea of the size/scope of the data and transformations  
needed, it's hard to say. In general, though, I would dump the MS SQL  
into a text file, read that into a Postgres table, and then manipulate  
the data there with update statements.
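Ed's three steps (dump to text, bulk-load, then clean up with UPDATEs) can be sketched as below. The row data, the `staging` table, and its columns are invented for illustration, and the COPY/UPDATE statements are only built as strings here, not executed against a server:

```python
import csv
import io

# Hypothetical rows pulled from MS SQL (names invented for illustration).
rows = [(1, "Safeway #12", ""), (2, "Home Depot", "")]

# Step 1: dump to a text file.  An in-memory buffer keeps the sketch
# self-contained; in practice this would be a real file on disk.
buf = io.StringIO()
csv.writer(buf).writerows(rows)
dump = buf.getvalue()

# Steps 2 and 3: bulk-load the file into a Postgres staging table,
# then massage the data in place with UPDATE statements.
copy_sql = "COPY staging FROM '/tmp/dump.csv' WITH (FORMAT csv)"
update_sql = ("UPDATE staging SET account = 'Groceries' "
              "WHERE descr LIKE '%Safeway%'")
```

Doing the massaging server-side like this keeps the Python side down to file shuffling, which is why it scales to far more than 2000 rows.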


-- Ed Leafe







Re: [dabo-users] Design Question

2009-08-28 Thread John
On Friday 28 August 2009 10:28:27 am Jeff Johnson wrote:
 Greetings:  I am putting together a procedure to read MSSql data and
 write to a PostGreSQL table.  The PostGreSQL table will be massaged and
 reformatted to their final tables.  The process will be done daily or
 weekly and involve no more than 2000 records - usually quite a bit less.

 Two questions,

 Would you recommend Dabo business objects for this or use an import /
 export procedure for just this purpose?

 Would you recommend PostGreSQL temporary tables for the intermediate
 data or Sqlite or a Dabo cursor?

 I am replacing a VFP process where I turn the MSSql result cursor into a
 free table and then massage it and import it into VFP tables.

 TIA,

I have done this several times, that is, converting data from MsSQL to 
Postgres.  I found that using Dabo (for the UI) along with Dabo's biz objects 
works perfectly.  I have two connections: one that accesses the MsSQL data and 
the other Postgres.  I just gather the data from the MsSQL tables as 
required, then insert into Postgres as required.  I was able to transfer 
several GBytes without any issues.  Fast too.  I even created the tables 
dynamically.  

I see no need for a Postgres temp cursor.  Just insert.  

Note that reserved words in postgres may trip you up (like name, state).  They 
did for me.  
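The usual fix for John's reserved-word problem (columns like `name` or `state`) is to double-quote the identifiers. A minimal sketch of the Postgres quoting rule, where any embedded double quote is doubled; the `customer` table is a hypothetical example:

```python
def quote_ident(ident):
    """Double-quote a Postgres identifier so reserved words like
    'name' or 'state' work as column names.  Embedded double quotes
    are doubled, per the Postgres quoting rule."""
    return '"' + ident.replace('"', '""') + '"'

# Hypothetical usage: %%s survives as the psycopg2-style placeholder %s.
sql = "INSERT INTO customer (%s, %s) VALUES (%%s, %%s)" % (
    quote_ident("name"), quote_ident("state"))
```

Remember Ed's earlier caveat, though: once quoted, the identifier becomes case-sensitive.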

If you have other questions let me know.

Johnf





Re: [dabo-users] Design Question

2009-08-28 Thread Jeff Johnson
Ed:  At the risk of being even more annoying, that is the way I spelled 
it.   ;^)  ...but I will not capitalize the G any more.

jEFF

-- 
Jeff

Jeff Johnson
j...@dcsoftware.com
Phoenix Python User Group - sunpigg...@googlegroups.com

Ed Leafe wrote:
 On Aug 28, 2009, at 1:28 PM, Jeff Johnson wrote:
 
 Greetings:  I am putting together a procedure to read MSSql data and
  write to a PostGreSQL table.  The PostGreSQL table will be massaged and reformatted to their final tables.  The process will be done daily or weekly and involve no more than 2000 records - usually quite a bit less.
 
   At the risk of being annoying, it's spelled 'PostgreSQL'.
 






Re: [dabo-users] Design Question

2009-08-28 Thread Jeff Johnson
Ed, Paul  John:  Thanks for your input.  I have another question about 
PostgreSQL, do you need to close the connection when you are done?

-- 
Jeff

Jeff Johnson
j...@dcsoftware.com
Phoenix Python User Group - sunpigg...@googlegroups.com


Ed Leafe wrote:
 On Aug 28, 2009, at 1:28 PM, Jeff Johnson wrote:
 
 Greetings:  I am putting together a procedure to read MSSql data and
  write to a PostGreSQL table.  The PostGreSQL table will be massaged and reformatted to their final tables.  The process will be done daily or weekly and involve no more than 2000 records - usually quite a bit less.





Re: [dabo-users] Design Question

2008-01-23 Thread Jeff Johnson
Thanks Ed, Paul and Nate!  You've given me plenty of great ideas.

Jeff

Jeff Johnson
[EMAIL PROTECTED]
SanDC, Inc.
623-582-0323
Fax 623-869-0675

Jeff Johnson wrote:
 I have a Quickbooks import program that takes a .csv from the bank and 
 creates an .iif Quickbooks import file.  It works great except I have to 
 reclassify each entry from a default account to the appropriate account. 
   I would like to reclassify as many of the entries prior to import as I 
 can.  Here is the logic:
 
   d = {'Safeway':'Groceries', 'Albertson':'Groceries', 
 'HomeDepot':'Office Maintenance'}
 
 The descriptions contain the keys from the dictionary so I need to do a 
 scan through all of the keys to see if they are in the description. 
 Something like this:  (I'm just thinking so ignore syntax)
  s = description
  vendors = d.keys()
  for k in vendors:
      if k in s:
          account = d[k]
      else:
          pass
 
  if account:
      s = account
 
 This would mean scrolling through all of the keys to check to see if it 
 is in the description - and do it for each record.  Not very efficient 
 although it would work.
 
 Might there be a better way?
 
 
   




Re: [dabo-users] Design Question

2008-01-23 Thread Ed Leafe
On Jan 22, 2008, at 10:49 PM, Nate Lowrie wrote:

  if not description.find(vendor) == -1:

Simpler:

if vendor in description:

-- Ed





Re: [dabo-users] Design Question

2008-01-23 Thread Jeff Johnson
Paul:  I use different debit cards for business and personal and I 
rarely purchase different things at different places like you mentioned 
below.  I was an accountant in an earlier life and this is natural for 
me.  It only takes a couple of hours to do all of my accounting for the 
whole year.  I used to use FoxPro to create my Quickbooks files, but in 
October I switched over to Python.

Jeff

Jeff Johnson
[EMAIL PROTECTED]
SanDC, Inc.
623-582-0323
Fax 623-869-0675

 
 Now, what about cases where you don't always purchase the same types of 
 things from the same place? For instance, we purchase prescription 
 medicine *and* groceries from Nob Hill. Others purchase groceries *or* 
 gasoline from Safeway
 
 Paul
 




Re: [dabo-users] Design Question

2008-01-23 Thread Nate Lowrie
On Jan 23, 2008 8:30 AM, Ed Leafe [EMAIL PROTECTED] wrote:
 On Jan 22, 2008, at 10:49 PM, Nate Lowrie wrote:

   if not description.find(vendor) == -1:

 Simpler:

 if vendor in description:

Well, that's nifty...I didn't realize that I could do that...Thanks Ed.

Nate L.




Re: [dabo-users] Design Question

2008-01-22 Thread Ed Leafe
On Jan 22, 2008, at 7:28 PM, Jeff Johnson wrote:

 I have a Quickbooks import program that takes a .csv from the bank and
 creates an .iif Quickbooks import file.  It works great except I  
 have to
 reclassify each entry from a default account to the appropriate  
 account.
 I would like to reclassify as many of the entries prior to import as I
 can.  Here is the logic:

    d = {'Safeway':'Groceries', 'Albertson':'Groceries',
  'HomeDepot':'Office Maintenance'}

 The descriptions contain the keys from the dictionary so I need to  
 do a
 scan through all of the keys to see if they are in the description.
 Something like this:  (I'm just thinking so ignore syntax)
 s = description
 vendors = d.keys()
 for k in vendors:
     if k in s:
         account = d[k]
     else:
         pass
 
 if account:
     s = account

If you have a dict that maps the bank names (keys) to Quicken  
accounts (values), all you need to do is:

try:
    account = d[description]  # Note: *square* brackets!
except KeyError:
    account = None
if account:
    s = account

In other words, just reference the key, and trap the KeyError  
exception for those that do not match.

BTW, to test if a particular item is a key in a dict, all you need is  
the 'in' operator. E.g.:

dct = {"A": "foo", "B": "bar", 44: 88}
print "A" in dct
--> True
print 42 in dct
--> False
print "bar" in dct
--> False  # Only works for keys

-- Ed





Re: [dabo-users] Design Question

2008-01-22 Thread Jeff Johnson
Ed:  The trouble is that the key will look like this 'Safeway' and the 
description will look like this '11222llxxxSafewayBlahBlahBlah'

I need to see if Safeway is in that description.
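Jeff's substring requirement is exactly what his original sketch scans for; a runnable version (vendor and account names taken from the thread, in modern Python rather than the 2008-era syntax):

```python
d = {"Safeway": "Groceries", "Albertson": "Groceries",
     "HomeDepot": "Office Maintenance"}

def account_for(description, mapping=d):
    """Return the account whose vendor key occurs inside the raw bank
    description, or None if no key matches.  Scans every key, since a
    plain dict lookup cannot do substring matching."""
    for vendor, account in mapping.items():
        if vendor in description:
            return account
    return None

account_for("11222llxxxSafewayBlahBlahBlah")  # 'Groceries'
```

With a few dozen vendors and ~2000 records, the O(keys x records) scan is entirely adequate, so the simple loop is hard to beat.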

Jeff

Jeff Johnson
[EMAIL PROTECTED]
SanDC, Inc.
623-582-0323
Fax 623-869-0675

Ed Leafe wrote:
 On Jan 22, 2008, at 7:28 PM, Jeff Johnson wrote:
 
 I have a Quickbooks import program that takes a .csv from the bank and
 creates an .iif Quickbooks import file.  It works great except I  
 have to
 reclassify each entry from a default account to the appropriate  
 account.
 I would like to reclassify as many of the entries prior to import as I
 can.  Here is the logic:

   d = {'Safeway':'Groceries', 'Albertson':'Groceries',
  'HomeDepot':'Office Maintenance'}

 The descriptions contain the keys from the dictionary so I need to  
 do a
 scan through all of the keys to see if they are in the description.
 Something like this:  (I'm just thinking so ignore syntax)
 s = description
 vendors = d.keys()
 for k in vendors:
     if k in s:
         account = d[k]
     else:
         pass
 
 if account:
     s = account
 
   If you have a dict that maps the bank names (keys) to Quicken  
 accounts (values), all you need to do is:
 
  try:
      account = d[description]  # Note: *square* brackets!
  except KeyError:
      account = None
  if account:
      s = account
 
 In other words, just reference the key, and trap the KeyError  
 exception for those that do not match.
 
   BTW, to test if a particular item is a key in a dict, all you need is  
 the 'in' operator. E.g.:
 
  dct = {"A": "foo", "B": "bar", 44: 88}
  print "A" in dct
  --> True
  print 42 in dct
  --> False
  print "bar" in dct
  --> False  # Only works for keys
 
 -- Ed
 
 
 
[excessive quoting removed by server]
